Feb 20 14:44:19.262568 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 20 14:44:19.871917 master-0 kubenswrapper[4172]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 14:44:19.871917 master-0 kubenswrapper[4172]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 20 14:44:19.871917 master-0 kubenswrapper[4172]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 14:44:19.875201 master-0 kubenswrapper[4172]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 14:44:19.875201 master-0 kubenswrapper[4172]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 20 14:44:19.875201 master-0 kubenswrapper[4172]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
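The deprecation warnings above all point to the same remedy: move these CLI flags into the file passed to --config. A minimal sketch of what that KubeletConfiguration could look like, using the flag values that appear later in this log — field names are from the kubelet.config.k8s.io/v1beta1 API, and the eviction threshold shown is a hypothetical example, not a value from this log:

```yaml
# Sketch of a kubelet config file (e.g. /etc/kubernetes/kubelet.conf).
# On OpenShift nodes this file is machine-config managed; edit via MachineConfig, not by hand.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: "/var/run/crio/crio.sock"            # replaces --container-runtime-endpoint
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec" # replaces --volume-plugin-dir
registerWithTaints:                                            # replaces --register-with-taints
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
systemReserved:                                                # replaces --system-reserved
  cpu: 500m
  memory: 1Gi
  ephemeral-storage: 1Gi
evictionHard:                       # --minimum-container-ttl-duration has no config-file field;
  memory.available: "100Mi"         # it is superseded by eviction settings (threshold is illustrative)
```

Note that --pod-infra-container-image has no KubeletConfiguration equivalent either; per the warning above, the sandbox image should also be configured in the container runtime (for CRI-O, `pause_image` in crio.conf).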
Feb 20 14:44:19.877470 master-0 kubenswrapper[4172]: I0220 14:44:19.877275 4172 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 20 14:44:19.882570 master-0 kubenswrapper[4172]: W0220 14:44:19.882527 4172 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 14:44:19.882570 master-0 kubenswrapper[4172]: W0220 14:44:19.882556 4172 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 14:44:19.882570 master-0 kubenswrapper[4172]: W0220 14:44:19.882564 4172 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 14:44:19.882570 master-0 kubenswrapper[4172]: W0220 14:44:19.882573 4172 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882582 4172 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882594 4172 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882605 4172 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882616 4172 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882624 4172 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882633 4172 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882642 4172 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882650 4172 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882658 4172 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882666 4172 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882674 4172 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882682 4172 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882689 4172 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882697 4172 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882705 4172 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882713 4172 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882721 4172 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882744 4172 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 14:44:19.882775 master-0 kubenswrapper[4172]: W0220 14:44:19.882754 4172 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882764 4172 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882773 4172 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882782 4172 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882792 4172 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882801 4172 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882813 4172 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882823 4172 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882831 4172 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882840 4172 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882848 4172 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882856 4172 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882864 4172 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882872 4172 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882880 4172 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882887 4172 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882896 4172 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882903 4172 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882911 4172 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882918 4172 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 14:44:19.883639 master-0 kubenswrapper[4172]: W0220 14:44:19.882951 4172 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.882959 4172 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.882966 4172 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.882974 4172 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.882982 4172 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.882990 4172 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.882998 4172 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883006 4172 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883013 4172 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883021 4172 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883029 4172 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883037 4172 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883045 4172 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883053 4172 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883078 4172 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883086 4172 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883094 4172 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883102 4172 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883110 4172 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883118 4172 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 14:44:19.884556 master-0 kubenswrapper[4172]: W0220 14:44:19.883126 4172 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: W0220 14:44:19.883134 4172 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: W0220 14:44:19.883144 4172 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: W0220 14:44:19.883154 4172 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: W0220 14:44:19.883162 4172 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: W0220 14:44:19.883170 4172 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: W0220 14:44:19.883178 4172 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: W0220 14:44:19.883186 4172 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: W0220 14:44:19.883193 4172 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: W0220 14:44:19.883201 4172 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: I0220 14:44:19.883350 4172 flags.go:64] FLAG: --address="0.0.0.0"
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: I0220 14:44:19.883366 4172 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: I0220 14:44:19.883380 4172 flags.go:64] FLAG: --anonymous-auth="true"
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: I0220 14:44:19.883391 4172 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: I0220 14:44:19.883403 4172 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: I0220 14:44:19.883412 4172 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: I0220 14:44:19.883426 4172 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: I0220 14:44:19.883437 4172 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: I0220 14:44:19.883446 4172 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: I0220 14:44:19.883456 4172 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: I0220 14:44:19.883466 4172 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 20 14:44:19.885450 master-0 kubenswrapper[4172]: I0220 14:44:19.883475 4172 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883485 4172 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883494 4172 flags.go:64] FLAG: --cgroup-root=""
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883503 4172 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883512 4172 flags.go:64] FLAG: --client-ca-file=""
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883528 4172 flags.go:64] FLAG: --cloud-config=""
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883537 4172 flags.go:64] FLAG: --cloud-provider=""
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883546 4172 flags.go:64] FLAG: --cluster-dns="[]"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883557 4172 flags.go:64] FLAG: --cluster-domain=""
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883566 4172 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883576 4172 flags.go:64] FLAG: --config-dir=""
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883584 4172 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883594 4172 flags.go:64] FLAG: --container-log-max-files="5"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883606 4172 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883636 4172 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883646 4172 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883658 4172 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883668 4172 flags.go:64] FLAG: --contention-profiling="false"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883678 4172 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883687 4172 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883697 4172 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883706 4172 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883717 4172 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883726 4172 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883735 4172 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 20 14:44:19.886403 master-0 kubenswrapper[4172]: I0220 14:44:19.883743 4172 flags.go:64] FLAG: --enable-load-reader="false"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883752 4172 flags.go:64] FLAG: --enable-server="true"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883761 4172 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883772 4172 flags.go:64] FLAG: --event-burst="100"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883781 4172 flags.go:64] FLAG: --event-qps="50"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883790 4172 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883800 4172 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883810 4172 flags.go:64] FLAG: --eviction-hard=""
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883820 4172 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883829 4172 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883839 4172 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883848 4172 flags.go:64] FLAG: --eviction-soft=""
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883860 4172 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883869 4172 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883879 4172 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883888 4172 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883896 4172 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883905 4172 flags.go:64] FLAG: --fail-swap-on="true"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883914 4172 flags.go:64] FLAG: --feature-gates=""
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883948 4172 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883958 4172 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883968 4172 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883978 4172 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883988 4172 flags.go:64] FLAG: --healthz-port="10248"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.883998 4172 flags.go:64] FLAG: --help="false"
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.884007 4172 flags.go:64] FLAG: --hostname-override=""
Feb 20 14:44:19.887583 master-0 kubenswrapper[4172]: I0220 14:44:19.884016 4172 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884025 4172 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884035 4172 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884043 4172 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884052 4172 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884060 4172 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884069 4172 flags.go:64] FLAG: --image-service-endpoint=""
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884077 4172 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884087 4172 flags.go:64] FLAG: --kube-api-burst="100"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884095 4172 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884104 4172 flags.go:64] FLAG: --kube-api-qps="50"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884113 4172 flags.go:64] FLAG: --kube-reserved=""
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884122 4172 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884131 4172 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884140 4172 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884149 4172 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884160 4172 flags.go:64] FLAG: --lock-file=""
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884168 4172 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884181 4172 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884189 4172 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884202 4172 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884211 4172 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884220 4172 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884229 4172 flags.go:64] FLAG: --logging-format="text"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884238 4172 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 20 14:44:19.888945 master-0 kubenswrapper[4172]: I0220 14:44:19.884248 4172 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884256 4172 flags.go:64] FLAG: --manifest-url=""
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884265 4172 flags.go:64] FLAG: --manifest-url-header=""
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884276 4172 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884285 4172 flags.go:64] FLAG: --max-open-files="1000000"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884295 4172 flags.go:64] FLAG: --max-pods="110"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884304 4172 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884314 4172 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884322 4172 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884332 4172 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884341 4172 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884350 4172 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884359 4172 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884377 4172 flags.go:64] FLAG: --node-status-max-images="50"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884386 4172 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884395 4172 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884403 4172 flags.go:64] FLAG: --pod-cidr=""
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884412 4172 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884426 4172 flags.go:64] FLAG: --pod-manifest-path=""
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884434 4172 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884443 4172 flags.go:64] FLAG: --pods-per-core="0"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884452 4172 flags.go:64] FLAG: --port="10250"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884461 4172 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884471 4172 flags.go:64] FLAG: --provider-id=""
Feb 20 14:44:19.890043 master-0 kubenswrapper[4172]: I0220 14:44:19.884481 4172 flags.go:64] FLAG: --qos-reserved=""
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884490 4172 flags.go:64] FLAG: --read-only-port="10255"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884502 4172 flags.go:64] FLAG: --register-node="true"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884512 4172 flags.go:64] FLAG: --register-schedulable="true"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884521 4172 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884536 4172 flags.go:64] FLAG: --registry-burst="10"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884544 4172 flags.go:64] FLAG: --registry-qps="5"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884553 4172 flags.go:64] FLAG: --reserved-cpus=""
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884563 4172 flags.go:64] FLAG: --reserved-memory=""
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884575 4172 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884584 4172 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884594 4172 flags.go:64] FLAG: --rotate-certificates="false"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884604 4172 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884613 4172 flags.go:64] FLAG: --runonce="false"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884622 4172 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884632 4172 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884640 4172 flags.go:64] FLAG: --seccomp-default="false"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884649 4172 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884658 4172 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884667 4172 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884676 4172 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884685 4172 flags.go:64] FLAG: --storage-driver-password="root"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884694 4172 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884703 4172 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884711 4172 flags.go:64] FLAG: --storage-driver-user="root"
Feb 20 14:44:19.891097 master-0 kubenswrapper[4172]: I0220 14:44:19.884720 4172 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884729 4172 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884739 4172 flags.go:64] FLAG: --system-cgroups=""
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884747 4172 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884760 4172 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884769 4172 flags.go:64] FLAG: --tls-cert-file=""
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884778 4172 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884789 4172 flags.go:64] FLAG: --tls-min-version=""
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884798 4172 flags.go:64] FLAG: --tls-private-key-file=""
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884811 4172 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884820 4172 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884829 4172 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884838 4172 flags.go:64] FLAG: --v="2"
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884849 4172 flags.go:64] FLAG: --version="false"
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884861 4172 flags.go:64] FLAG: --vmodule=""
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884871 4172 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: I0220 14:44:19.884881 4172 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: W0220 14:44:19.885106 4172 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: W0220 14:44:19.885118 4172 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: W0220 14:44:19.885126 4172 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: W0220 14:44:19.885137 4172 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: W0220 14:44:19.885147 4172 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: W0220 14:44:19.885156 4172 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 14:44:19.892218 master-0 kubenswrapper[4172]: W0220 14:44:19.885164 4172 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885171 4172 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885180 4172 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885188 4172 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885196 4172 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885204 4172 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885212 4172 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885219 4172 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885227 4172 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885235 4172 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885242 4172 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885251 4172 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885258 4172 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885266 4172 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885274 4172 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885281 4172 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885289 4172 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885297 4172 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885307 4172 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885315 4172 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 14:44:19.893600 master-0 kubenswrapper[4172]: W0220 14:44:19.885323 4172 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885331 4172 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885339 4172 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885346 4172 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885354 4172 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885362 4172 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885369 4172 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885386 4172 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885394 4172 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885402 4172 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885410 4172 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885417 4172 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885424 4172 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885432 4172 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885440 4172 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885448 4172 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885456 4172 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885464 4172 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885473 4172 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885484 4172 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 14:44:19.894690 master-0 kubenswrapper[4172]: W0220 14:44:19.885495 4172 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885505 4172 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885513 4172 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885521 4172 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885529 4172 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885537 4172 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885545 4172 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885552 4172 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885559 4172 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885567 4172 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885577 4172 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885587 4172 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885598 4172 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885607 4172 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885616 4172 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885625 4172 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885634 4172 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885643 4172 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885651 4172 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 14:44:19.895708 master-0 kubenswrapper[4172]: W0220 14:44:19.885659 4172 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 14:44:19.896703 master-0 kubenswrapper[4172]: W0220 14:44:19.885666 4172 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 14:44:19.896703 master-0 kubenswrapper[4172]: W0220 14:44:19.885675 4172 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 14:44:19.896703 master-0 kubenswrapper[4172]: W0220 14:44:19.885682 4172 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 14:44:19.896703 master-0 kubenswrapper[4172]: W0220 14:44:19.885694 4172 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 14:44:19.896703 master-0 kubenswrapper[4172]: W0220 14:44:19.885702 4172 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 14:44:19.896703 master-0 kubenswrapper[4172]: W0220 14:44:19.885709 4172 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 14:44:19.896703 master-0 kubenswrapper[4172]: I0220 14:44:19.886619 4172 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 14:44:19.898240 master-0 kubenswrapper[4172]: I0220 14:44:19.897795 4172 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 20 14:44:19.898240 master-0 kubenswrapper[4172]: I0220 14:44:19.898234 4172 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 20 14:44:19.898424 master-0 kubenswrapper[4172]: W0220 14:44:19.898389 4172 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 14:44:19.898424 master-0 kubenswrapper[4172]: W0220 14:44:19.898416 4172 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 14:44:19.898524 master-0 kubenswrapper[4172]: W0220 14:44:19.898428 4172 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 14:44:19.898524 master-0 kubenswrapper[4172]: W0220 14:44:19.898438 4172 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 14:44:19.898524 master-0 kubenswrapper[4172]: W0220 14:44:19.898446 4172 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 14:44:19.898524 master-0 kubenswrapper[4172]: W0220 14:44:19.898456 4172 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 14:44:19.898524 master-0 kubenswrapper[4172]: W0220 14:44:19.898467 4172 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 14:44:19.898524 master-0 kubenswrapper[4172]: W0220 14:44:19.898479 4172 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 14:44:19.898524 master-0 kubenswrapper[4172]: W0220 14:44:19.898489 4172 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 14:44:19.898524 master-0 kubenswrapper[4172]: W0220 14:44:19.898501 4172 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 14:44:19.898524 master-0 kubenswrapper[4172]: W0220 14:44:19.898514 4172 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 14:44:19.898524 master-0 kubenswrapper[4172]: W0220 14:44:19.898524 4172 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 14:44:19.898524 master-0 kubenswrapper[4172]: W0220 14:44:19.898534 4172 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898544 4172 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898552 4172 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898559 4172 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898570 4172 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898579 4172 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898590 4172 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898600 4172 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898610 4172 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898623 4172 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898638 4172 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898649 4172 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898661 4172 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898672 4172 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898680 4172 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898687 4172 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898695 4172 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898703 4172 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898711 4172 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 14:44:19.899110 master-0 kubenswrapper[4172]: W0220 14:44:19.898718 4172 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898726 4172 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898734 4172 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898741 4172 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898749 4172 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898756 4172 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898764 4172 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898771 4172 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898780 4172 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898787 4172 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898795 4172 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898803 4172 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898810 4172 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898818 4172 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898826 4172 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898836 4172 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898844 4172 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898853 4172 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898861 4172 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898869 4172 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 14:44:19.900025 master-0 kubenswrapper[4172]: W0220 14:44:19.898877 4172 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.898885 4172 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.898892 4172 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.898901 4172 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.898909 4172 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.898919 4172 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.898962 4172 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.898975 4172 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.898986 4172 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.898996 4172 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.899008 4172 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.899018 4172 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.899028 4172 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.899038 4172 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.899049 4172 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.899059 4172 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.899069 4172 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.899079 4172 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.899089 4172 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 14:44:19.901142 master-0 kubenswrapper[4172]: W0220 14:44:19.899098 4172 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899108 4172 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: I0220 14:44:19.899122 4172 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899371 4172 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899389 4172 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899400 4172 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899411 4172 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899425 4172 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899441 4172 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899455 4172 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899466 4172 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899477 4172 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899487 4172 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899498 4172 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 14:44:19.902124 master-0 kubenswrapper[4172]: W0220 14:44:19.899507 4172 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899518 4172 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899528 4172 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899539 4172 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899549 4172 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899558 4172 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899569 4172 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899579 4172 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899590 4172 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899599 4172 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899609 4172 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899619 4172 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899656 4172 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899669 4172 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899679 4172 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899689 4172 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899699 4172 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899709 4172 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899719 4172 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899728 4172 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899738 4172 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 14:44:19.902780 master-0 kubenswrapper[4172]: W0220 14:44:19.899747 4172 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899761 4172 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899774 4172 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899785 4172 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899796 4172 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899809 4172 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899820 4172 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899830 4172 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899840 4172 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899853 4172 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899864 4172 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899875 4172 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899886 4172 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899896 4172 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899906 4172 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899916 4172 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899956 4172 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899967 4172 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899978 4172 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 14:44:19.904053 master-0 kubenswrapper[4172]: W0220 14:44:19.899987 4172 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.899997 4172 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900007 4172 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900017 4172 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900027 4172 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900036 4172 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900046 4172 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900055 4172 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900065 4172 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900077 4172 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900088 4172 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900098 4172 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900107 4172 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900117 4172 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900141 4172 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900151 4172 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900161 4172 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900171 4172 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900181 4172 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 14:44:19.904887 master-0 kubenswrapper[4172]: W0220 14:44:19.900195 4172 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 14:44:19.905754 master-0 kubenswrapper[4172]: W0220 14:44:19.900206 4172 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 14:44:19.905754 master-0 kubenswrapper[4172]: I0220 14:44:19.900223 4172 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 14:44:19.905754 master-0 kubenswrapper[4172]: I0220 14:44:19.900484 4172 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 20 14:44:19.905754 master-0 kubenswrapper[4172]: I0220 14:44:19.903997 4172 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 20 14:44:19.906103 master-0 kubenswrapper[4172]: I0220 14:44:19.906052 4172 server.go:997] "Starting client certificate rotation"
Feb 20 14:44:19.906164 master-0 kubenswrapper[4172]: I0220 14:44:19.906104 4172 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 20 14:44:19.906435 master-0 kubenswrapper[4172]: I0220 14:44:19.906370 4172 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 20 14:44:19.940855 master-0 kubenswrapper[4172]: I0220 14:44:19.940782 4172 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 20 14:44:19.946185 master-0 kubenswrapper[4172]: I0220 14:44:19.946131 4172 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 20 14:44:19.947035 master-0 kubenswrapper[4172]: E0220 14:44:19.946870 4172 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 14:44:19.978209 master-0 kubenswrapper[4172]: I0220 14:44:19.978117 4172 log.go:25] "Validated CRI v1 runtime API"
Feb 20 14:44:19.985762 master-0 kubenswrapper[4172]: I0220 14:44:19.985712 4172 log.go:25] "Validated CRI v1 image API"
Feb 20 14:44:19.988327 master-0 kubenswrapper[4172]: I0220 14:44:19.988268 4172 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 20 14:44:19.994009 master-0 kubenswrapper[4172]: I0220 14:44:19.993919 4172 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 f887e099-fa60-4eeb-b981-d71fb787fc62:/dev/vda3]
Feb 20 14:44:19.994132 master-0 kubenswrapper[4172]: I0220 14:44:19.993999 4172 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs
blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Feb 20 14:44:20.026760 master-0 kubenswrapper[4172]: I0220 14:44:20.026270 4172 manager.go:217] Machine: {Timestamp:2026-02-20 14:44:20.022875558 +0000 UTC m=+0.578101218 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:48b9dd3ce20842759e3dc6160315340b SystemUUID:48b9dd3c-e208-4275-9e3d-c6160315340b BootID:509e02d8-f41f-4d6f-8d1a-4efa2a52c9c0 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:2c:8d:77 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:3f:57:48 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:1a:32:2a:84:99:77 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] 
Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 20 14:44:20.026760 master-0 kubenswrapper[4172]: I0220 14:44:20.026697 4172 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Feb 20 14:44:20.027072 master-0 kubenswrapper[4172]: I0220 14:44:20.026880 4172 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 20 14:44:20.027624 master-0 kubenswrapper[4172]: I0220 14:44:20.027581 4172 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 20 14:44:20.028045 master-0 kubenswrapper[4172]: I0220 14:44:20.027983 4172 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 20 14:44:20.028371 master-0 kubenswrapper[4172]: I0220 14:44:20.028038 4172 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"ima
gefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 20 14:44:20.028456 master-0 kubenswrapper[4172]: I0220 14:44:20.028408 4172 topology_manager.go:138] "Creating topology manager with none policy" Feb 20 14:44:20.028456 master-0 kubenswrapper[4172]: I0220 14:44:20.028427 4172 container_manager_linux.go:303] "Creating device plugin manager" Feb 20 14:44:20.029170 master-0 kubenswrapper[4172]: I0220 14:44:20.029136 4172 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 14:44:20.029363 master-0 kubenswrapper[4172]: I0220 14:44:20.029177 4172 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 14:44:20.030026 master-0 kubenswrapper[4172]: I0220 14:44:20.029992 4172 state_mem.go:36] "Initialized new in-memory state store" Feb 20 14:44:20.030184 master-0 kubenswrapper[4172]: I0220 14:44:20.030153 4172 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 20 14:44:20.034074 master-0 kubenswrapper[4172]: I0220 14:44:20.034037 4172 kubelet.go:418] "Attempting to sync node with API server" Feb 20 14:44:20.034074 master-0 kubenswrapper[4172]: I0220 14:44:20.034071 4172 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 20 14:44:20.034189 master-0 kubenswrapper[4172]: I0220 14:44:20.034117 4172 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 20 14:44:20.034189 master-0 kubenswrapper[4172]: I0220 14:44:20.034137 4172 
kubelet.go:324] "Adding apiserver pod source" Feb 20 14:44:20.034189 master-0 kubenswrapper[4172]: I0220 14:44:20.034163 4172 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 20 14:44:20.044174 master-0 kubenswrapper[4172]: I0220 14:44:20.044113 4172 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 20 14:44:20.044838 master-0 kubenswrapper[4172]: W0220 14:44:20.044700 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:20.044968 master-0 kubenswrapper[4172]: W0220 14:44:20.044685 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:20.044968 master-0 kubenswrapper[4172]: E0220 14:44:20.044902 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 14:44:20.045147 master-0 kubenswrapper[4172]: E0220 14:44:20.045034 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 
14:44:20.046787 master-0 kubenswrapper[4172]: I0220 14:44:20.046733 4172 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 20 14:44:20.047212 master-0 kubenswrapper[4172]: I0220 14:44:20.047169 4172 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 20 14:44:20.047355 master-0 kubenswrapper[4172]: I0220 14:44:20.047219 4172 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 20 14:44:20.047355 master-0 kubenswrapper[4172]: I0220 14:44:20.047240 4172 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 20 14:44:20.047355 master-0 kubenswrapper[4172]: I0220 14:44:20.047258 4172 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 20 14:44:20.047355 master-0 kubenswrapper[4172]: I0220 14:44:20.047275 4172 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 20 14:44:20.047355 master-0 kubenswrapper[4172]: I0220 14:44:20.047295 4172 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 20 14:44:20.047355 master-0 kubenswrapper[4172]: I0220 14:44:20.047311 4172 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 20 14:44:20.047355 master-0 kubenswrapper[4172]: I0220 14:44:20.047328 4172 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 20 14:44:20.047355 master-0 kubenswrapper[4172]: I0220 14:44:20.047346 4172 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 20 14:44:20.047355 master-0 kubenswrapper[4172]: I0220 14:44:20.047363 4172 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 20 14:44:20.047872 master-0 kubenswrapper[4172]: I0220 14:44:20.047413 4172 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 20 14:44:20.048063 master-0 kubenswrapper[4172]: I0220 14:44:20.048022 4172 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Feb 20 14:44:20.050040 master-0 kubenswrapper[4172]: I0220 14:44:20.049888 4172 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 20 14:44:20.052722 master-0 kubenswrapper[4172]: I0220 14:44:20.052680 4172 server.go:1280] "Started kubelet" Feb 20 14:44:20.054100 master-0 kubenswrapper[4172]: I0220 14:44:20.053947 4172 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 20 14:44:20.054212 master-0 kubenswrapper[4172]: I0220 14:44:20.054127 4172 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 20 14:44:20.054212 master-0 kubenswrapper[4172]: I0220 14:44:20.053979 4172 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 20 14:44:20.054640 master-0 systemd[1]: Started Kubernetes Kubelet. Feb 20 14:44:20.055162 master-0 kubenswrapper[4172]: I0220 14:44:20.054876 4172 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 20 14:44:20.056100 master-0 kubenswrapper[4172]: I0220 14:44:20.056018 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:20.065316 master-0 kubenswrapper[4172]: I0220 14:44:20.065240 4172 server.go:449] "Adding debug handlers to kubelet server" Feb 20 14:44:20.065454 master-0 kubenswrapper[4172]: I0220 14:44:20.065407 4172 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 20 14:44:20.065559 master-0 kubenswrapper[4172]: I0220 14:44:20.065469 4172 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 20 14:44:20.065753 master-0 kubenswrapper[4172]: E0220 14:44:20.065682 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not 
found" Feb 20 14:44:20.066261 master-0 kubenswrapper[4172]: I0220 14:44:20.066203 4172 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 20 14:44:20.066261 master-0 kubenswrapper[4172]: I0220 14:44:20.066238 4172 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 20 14:44:20.066405 master-0 kubenswrapper[4172]: I0220 14:44:20.066262 4172 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 20 14:44:20.066460 master-0 kubenswrapper[4172]: I0220 14:44:20.066426 4172 reconstruct.go:97] "Volume reconstruction finished" Feb 20 14:44:20.066460 master-0 kubenswrapper[4172]: I0220 14:44:20.066441 4172 reconciler.go:26] "Reconciler: start to sync state" Feb 20 14:44:20.067415 master-0 kubenswrapper[4172]: E0220 14:44:20.067342 4172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 20 14:44:20.067519 master-0 kubenswrapper[4172]: W0220 14:44:20.067403 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:20.067578 master-0 kubenswrapper[4172]: E0220 14:44:20.067523 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 14:44:20.067578 master-0 kubenswrapper[4172]: E0220 14:44:20.065992 4172 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1895fb9850feb2cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.052628171 +0000 UTC m=+0.607853801,LastTimestamp:2026-02-20 14:44:20.052628171 +0000 UTC m=+0.607853801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:20.068580 master-0 kubenswrapper[4172]: I0220 14:44:20.068531 4172 factory.go:153] Registering CRI-O factory Feb 20 14:44:20.068580 master-0 kubenswrapper[4172]: I0220 14:44:20.068571 4172 factory.go:221] Registration of the crio container factory successfully Feb 20 14:44:20.068727 master-0 kubenswrapper[4172]: I0220 14:44:20.068667 4172 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 20 14:44:20.068787 master-0 kubenswrapper[4172]: I0220 14:44:20.068768 4172 factory.go:55] Registering systemd factory Feb 20 14:44:20.068843 master-0 kubenswrapper[4172]: I0220 14:44:20.068793 4172 factory.go:221] Registration of the systemd container factory successfully Feb 20 14:44:20.068982 master-0 kubenswrapper[4172]: E0220 14:44:20.068859 4172 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 20 14:44:20.069062 master-0 kubenswrapper[4172]: I0220 14:44:20.069021 4172 factory.go:103] Registering Raw factory Feb 20 14:44:20.069136 master-0 kubenswrapper[4172]: I0220 14:44:20.069062 4172 manager.go:1196] Started watching for new ooms in manager Feb 20 14:44:20.071144 master-0 kubenswrapper[4172]: I0220 14:44:20.071091 4172 manager.go:319] Starting recovery of all containers Feb 20 14:44:20.095902 master-0 kubenswrapper[4172]: I0220 14:44:20.095815 4172 manager.go:324] Recovery completed Feb 20 14:44:20.108826 master-0 kubenswrapper[4172]: I0220 14:44:20.108752 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:20.110771 master-0 kubenswrapper[4172]: I0220 14:44:20.110637 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:20.110771 master-0 kubenswrapper[4172]: I0220 14:44:20.110751 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:20.110771 master-0 kubenswrapper[4172]: I0220 14:44:20.110794 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:20.112220 master-0 kubenswrapper[4172]: I0220 14:44:20.112160 4172 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 20 14:44:20.112220 master-0 kubenswrapper[4172]: I0220 14:44:20.112195 4172 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 20 14:44:20.112220 master-0 kubenswrapper[4172]: I0220 14:44:20.112230 4172 state_mem.go:36] "Initialized new in-memory state store" Feb 20 14:44:20.116215 master-0 kubenswrapper[4172]: I0220 14:44:20.116171 4172 policy_none.go:49] "None policy: Start" Feb 20 14:44:20.117007 master-0 kubenswrapper[4172]: I0220 14:44:20.116964 4172 
memory_manager.go:170] "Starting memorymanager" policy="None" Feb 20 14:44:20.117059 master-0 kubenswrapper[4172]: I0220 14:44:20.117016 4172 state_mem.go:35] "Initializing new in-memory state store" Feb 20 14:44:20.166211 master-0 kubenswrapper[4172]: E0220 14:44:20.166103 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:44:20.191538 master-0 kubenswrapper[4172]: I0220 14:44:20.191489 4172 manager.go:334] "Starting Device Plugin manager" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: I0220 14:44:20.191553 4172 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: I0220 14:44:20.191572 4172 server.go:79] "Starting device plugin registration server" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: I0220 14:44:20.192101 4172 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: I0220 14:44:20.192120 4172 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: I0220 14:44:20.192739 4172 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: I0220 14:44:20.192904 4172 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: I0220 14:44:20.192951 4172 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: E0220 14:44:20.194366 4172 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: I0220 14:44:20.204669 4172 kubelet_network_linux.go:50] "Initialized 
iptables rules." protocol="IPv4" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: I0220 14:44:20.206463 4172 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: I0220 14:44:20.206533 4172 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: I0220 14:44:20.206561 4172 kubelet.go:2335] "Starting kubelet main sync loop" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: E0220 14:44:20.206630 4172 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: W0220 14:44:20.207489 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:20.218723 master-0 kubenswrapper[4172]: E0220 14:44:20.207532 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 14:44:20.268655 master-0 kubenswrapper[4172]: E0220 14:44:20.268565 4172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 20 14:44:20.292786 master-0 kubenswrapper[4172]: I0220 14:44:20.292707 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:20.293839 master-0 kubenswrapper[4172]: 
I0220 14:44:20.293785 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:20.293950 master-0 kubenswrapper[4172]: I0220 14:44:20.293846 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:20.293950 master-0 kubenswrapper[4172]: I0220 14:44:20.293871 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:20.293950 master-0 kubenswrapper[4172]: I0220 14:44:20.293917 4172 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 14:44:20.294853 master-0 kubenswrapper[4172]: E0220 14:44:20.294780 4172 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 20 14:44:20.307082 master-0 kubenswrapper[4172]: I0220 14:44:20.307008 4172 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Feb 20 14:44:20.307190 master-0 kubenswrapper[4172]: I0220 14:44:20.307114 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:20.308325 master-0 kubenswrapper[4172]: I0220 14:44:20.308276 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:20.308325 master-0 kubenswrapper[4172]: I0220 14:44:20.308325 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:20.308678 master-0 kubenswrapper[4172]: I0220 14:44:20.308343 4172 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:20.308678 master-0 kubenswrapper[4172]: I0220 14:44:20.308458 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:20.308678 master-0 kubenswrapper[4172]: I0220 14:44:20.308637 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:44:20.308844 master-0 kubenswrapper[4172]: I0220 14:44:20.308730 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:20.309389 master-0 kubenswrapper[4172]: I0220 14:44:20.309341 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:20.309469 master-0 kubenswrapper[4172]: I0220 14:44:20.309392 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:20.309469 master-0 kubenswrapper[4172]: I0220 14:44:20.309410 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:20.309589 master-0 kubenswrapper[4172]: I0220 14:44:20.309531 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:20.309724 master-0 kubenswrapper[4172]: I0220 14:44:20.309681 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:20.309810 master-0 kubenswrapper[4172]: I0220 14:44:20.309736 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:20.309810 master-0 kubenswrapper[4172]: I0220 14:44:20.309757 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:20.309810 master-0 kubenswrapper[4172]: I0220 14:44:20.309775 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:44:20.310005 master-0 kubenswrapper[4172]: I0220 14:44:20.309828 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:20.310335 master-0 kubenswrapper[4172]: I0220 14:44:20.310292 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:20.310335 master-0 kubenswrapper[4172]: I0220 14:44:20.310327 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:20.310475 master-0 kubenswrapper[4172]: I0220 14:44:20.310342 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:20.310809 master-0 kubenswrapper[4172]: I0220 14:44:20.310769 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:20.310902 master-0 kubenswrapper[4172]: I0220 14:44:20.310817 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:20.310902 master-0 kubenswrapper[4172]: I0220 14:44:20.310841 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:20.310902 master-0 kubenswrapper[4172]: I0220 14:44:20.310855 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.310902 master-0 kubenswrapper[4172]: I0220 14:44:20.310890 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:20.311168 master-0 kubenswrapper[4172]: I0220 14:44:20.310785 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:20.313312 master-0 kubenswrapper[4172]: I0220 14:44:20.313252 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:20.313419 master-0 kubenswrapper[4172]: I0220 14:44:20.313334 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:20.313419 master-0 kubenswrapper[4172]: I0220 14:44:20.313366 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:20.313869 master-0 kubenswrapper[4172]: I0220 14:44:20.313829 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:20.314156 master-0 kubenswrapper[4172]: I0220 14:44:20.314111 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.314296 master-0 kubenswrapper[4172]: I0220 14:44:20.314250 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:20.315209 master-0 kubenswrapper[4172]: I0220 14:44:20.315170 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:20.315400 master-0 kubenswrapper[4172]: I0220 14:44:20.315266 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:20.315481 master-0 kubenswrapper[4172]: I0220 14:44:20.315413 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:20.315481 master-0 kubenswrapper[4172]: I0220 14:44:20.315372 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:20.315596 master-0 kubenswrapper[4172]: I0220 14:44:20.315482 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:20.315596 master-0 kubenswrapper[4172]: I0220 14:44:20.315525 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:20.315596 master-0 kubenswrapper[4172]: I0220 14:44:20.315486 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:20.315596 master-0 kubenswrapper[4172]: I0220 14:44:20.315549 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:20.315596 master-0 kubenswrapper[4172]: I0220 14:44:20.315435 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:20.316631 master-0 kubenswrapper[4172]: I0220 14:44:20.316603 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:44:20.316817 master-0 kubenswrapper[4172]: I0220 14:44:20.316794 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:20.318033 master-0 kubenswrapper[4172]: I0220 14:44:20.317907 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:20.318033 master-0 kubenswrapper[4172]: I0220 14:44:20.318008 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:20.318033 master-0 kubenswrapper[4172]: I0220 14:44:20.318026 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:20.368663 master-0 kubenswrapper[4172]: I0220 14:44:20.368608 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:44:20.368793 master-0 kubenswrapper[4172]: I0220 14:44:20.368666 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.368793 master-0 kubenswrapper[4172]: I0220 14:44:20.368700 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.368793 master-0 kubenswrapper[4172]: I0220 14:44:20.368734 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:44:20.369045 master-0 kubenswrapper[4172]: I0220 14:44:20.368786 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.369045 master-0 kubenswrapper[4172]: I0220 14:44:20.368836 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.369045 master-0 kubenswrapper[4172]: I0220 14:44:20.368972 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:44:20.369045 master-0 kubenswrapper[4172]: I0220 14:44:20.369038 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:44:20.369264 master-0 kubenswrapper[4172]: I0220 14:44:20.369081 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.369264 master-0 kubenswrapper[4172]: I0220 14:44:20.369123 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.369264 master-0 kubenswrapper[4172]: I0220 14:44:20.369155 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.369264 master-0 kubenswrapper[4172]: I0220 14:44:20.369189 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:44:20.369264 master-0 kubenswrapper[4172]: I0220 14:44:20.369222 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.369264 master-0 kubenswrapper[4172]: I0220 14:44:20.369253 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:44:20.369610 master-0 kubenswrapper[4172]: I0220 14:44:20.369285 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.369610 master-0 kubenswrapper[4172]: I0220 14:44:20.369317 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.369610 master-0 kubenswrapper[4172]: I0220 14:44:20.369348 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.470901 master-0 kubenswrapper[4172]: I0220 14:44:20.470527 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.470901 master-0 kubenswrapper[4172]: I0220 14:44:20.470895 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.471373 master-0 kubenswrapper[4172]: I0220 14:44:20.470729 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.471373 master-0 kubenswrapper[4172]: I0220 14:44:20.470970 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:44:20.471373 master-0 kubenswrapper[4172]: I0220 14:44:20.471005 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:44:20.471373 master-0 kubenswrapper[4172]: I0220 14:44:20.471028 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:44:20.471373 master-0 kubenswrapper[4172]: I0220 14:44:20.471037 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.471373 master-0 kubenswrapper[4172]: I0220 14:44:20.471079 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.471373 master-0 kubenswrapper[4172]: I0220 14:44:20.471089 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:44:20.471373 master-0 kubenswrapper[4172]: I0220 14:44:20.471149 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.471373 master-0 kubenswrapper[4172]: I0220 14:44:20.471150 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.471373 master-0 kubenswrapper[4172]: I0220 14:44:20.471148 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471345 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471372 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471441 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471478 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471517 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471569 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471569 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471604 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471576 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471634 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471663 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471671 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471719 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471748 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471755 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471723 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.471895 master-0 kubenswrapper[4172]: I0220 14:44:20.471781 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.472860 master-0 kubenswrapper[4172]: I0220 14:44:20.471796 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.472860 master-0 kubenswrapper[4172]: I0220 14:44:20.471835 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:44:20.472860 master-0 kubenswrapper[4172]: I0220 14:44:20.471835 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.472860 master-0 kubenswrapper[4172]: I0220 14:44:20.471903 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.472860 master-0 kubenswrapper[4172]: I0220 14:44:20.471968 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:44:20.495714 master-0 kubenswrapper[4172]: I0220 14:44:20.495648 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:20.497043 master-0 kubenswrapper[4172]: I0220 14:44:20.496996 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:20.497161 master-0 kubenswrapper[4172]: I0220 14:44:20.497051 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:20.497161 master-0 kubenswrapper[4172]: I0220 14:44:20.497069 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:20.497161 master-0 kubenswrapper[4172]: I0220 14:44:20.497125 4172 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 20 14:44:20.498184 master-0 kubenswrapper[4172]: E0220 14:44:20.498121 4172 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 20 14:44:20.656279 master-0 kubenswrapper[4172]: I0220 14:44:20.656106 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:44:20.670567 master-0 kubenswrapper[4172]: I0220 14:44:20.670515 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:44:20.670567 master-0 kubenswrapper[4172]: E0220 14:44:20.670549 4172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 20 14:44:20.692176 master-0 kubenswrapper[4172]: I0220 14:44:20.692114 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:20.717428 master-0 kubenswrapper[4172]: I0220 14:44:20.717329 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:20.729470 master-0 kubenswrapper[4172]: I0220 14:44:20.729405 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:44:20.898682 master-0 kubenswrapper[4172]: I0220 14:44:20.898640 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:20.900807 master-0 kubenswrapper[4172]: I0220 14:44:20.900754 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:20.900807 master-0 kubenswrapper[4172]: I0220 14:44:20.900807 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:20.901024 master-0 kubenswrapper[4172]: I0220 14:44:20.900827 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:20.901024 master-0 kubenswrapper[4172]: I0220 14:44:20.900946 4172 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 20 14:44:20.901974 master-0 kubenswrapper[4172]: E0220 14:44:20.901901 4172 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 20 14:44:20.908621 master-0 kubenswrapper[4172]: W0220 14:44:20.908435 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 14:44:20.908621 master-0 kubenswrapper[4172]: E0220 14:44:20.908524 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 14:44:21.057462 master-0 kubenswrapper[4172]: I0220 14:44:21.057413 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 14:44:21.332273 master-0 kubenswrapper[4172]: W0220 14:44:21.332086 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 14:44:21.332273 master-0 kubenswrapper[4172]: E0220 14:44:21.332214 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 14:44:21.338504 master-0 kubenswrapper[4172]: W0220 14:44:21.338389 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 14:44:21.338646 master-0 kubenswrapper[4172]: E0220 14:44:21.338525 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 14:44:21.417196 master-0 kubenswrapper[4172]: W0220 14:44:21.417118 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod687e92a6cecf1e2beeef16a0b322ad08.slice/crio-c7111b0bf2b7379929af69699174f229cbbc25f01fc7ffc44b3371950f17c6f2 WatchSource:0}: Error finding container c7111b0bf2b7379929af69699174f229cbbc25f01fc7ffc44b3371950f17c6f2: Status 404 returned error can't find the container with id c7111b0bf2b7379929af69699174f229cbbc25f01fc7ffc44b3371950f17c6f2
Feb 20 14:44:21.418324 master-0 kubenswrapper[4172]: W0220 14:44:21.418241 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ad9373c007a4fcd25e70622bdc8deb.slice/crio-ff5aeff3d91fe04ad5b35e5f18daa8ee28aba3161b0999bafdb650c9674062ac WatchSource:0}: Error finding container ff5aeff3d91fe04ad5b35e5f18daa8ee28aba3161b0999bafdb650c9674062ac: Status 404 returned error can't find the container with id 
ff5aeff3d91fe04ad5b35e5f18daa8ee28aba3161b0999bafdb650c9674062ac Feb 20 14:44:21.424075 master-0 kubenswrapper[4172]: I0220 14:44:21.424035 4172 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 14:44:21.424292 master-0 kubenswrapper[4172]: W0220 14:44:21.424221 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12dab5d350ebc129b0bfa4714d330b15.slice/crio-b953e5f23702f5654559767cf06b2635635ca7c579d9ee9d2d2bf61bf3d9a6b1 WatchSource:0}: Error finding container b953e5f23702f5654559767cf06b2635635ca7c579d9ee9d2d2bf61bf3d9a6b1: Status 404 returned error can't find the container with id b953e5f23702f5654559767cf06b2635635ca7c579d9ee9d2d2bf61bf3d9a6b1 Feb 20 14:44:21.472155 master-0 kubenswrapper[4172]: E0220 14:44:21.472040 4172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 20 14:44:21.481385 master-0 kubenswrapper[4172]: W0220 14:44:21.481333 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56c3cb71c9851003c8de7e7c5db4b87e.slice/crio-d15f8dfa0d113319aa72954517575419d7a6afcad7f7cef9517b2fb935c0ea42 WatchSource:0}: Error finding container d15f8dfa0d113319aa72954517575419d7a6afcad7f7cef9517b2fb935c0ea42: Status 404 returned error can't find the container with id d15f8dfa0d113319aa72954517575419d7a6afcad7f7cef9517b2fb935c0ea42 Feb 20 14:44:21.575100 master-0 kubenswrapper[4172]: W0220 14:44:21.574920 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Feb 20 14:44:21.575100 master-0 kubenswrapper[4172]: E0220 14:44:21.575071 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 14:44:21.584868 master-0 kubenswrapper[4172]: W0220 14:44:21.584777 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc997c8e9d3be51d454d8e61e376bef08.slice/crio-5ea8ac7578359ce087855682fd87fbd08a72604f8701716ddbb28b051d93bff2 WatchSource:0}: Error finding container 5ea8ac7578359ce087855682fd87fbd08a72604f8701716ddbb28b051d93bff2: Status 404 returned error can't find the container with id 5ea8ac7578359ce087855682fd87fbd08a72604f8701716ddbb28b051d93bff2 Feb 20 14:44:21.703103 master-0 kubenswrapper[4172]: I0220 14:44:21.702953 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:21.704723 master-0 kubenswrapper[4172]: I0220 14:44:21.704636 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:21.704723 master-0 kubenswrapper[4172]: I0220 14:44:21.704713 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:21.704994 master-0 kubenswrapper[4172]: I0220 14:44:21.704750 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:21.704994 master-0 kubenswrapper[4172]: I0220 14:44:21.704825 4172 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 14:44:21.706157 master-0 kubenswrapper[4172]: E0220 
14:44:21.706091 4172 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 20 14:44:21.968877 master-0 kubenswrapper[4172]: I0220 14:44:21.968676 4172 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 20 14:44:21.970647 master-0 kubenswrapper[4172]: E0220 14:44:21.970575 4172 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 14:44:22.058440 master-0 kubenswrapper[4172]: I0220 14:44:22.058375 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:22.212358 master-0 kubenswrapper[4172]: I0220 14:44:22.212257 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"b953e5f23702f5654559767cf06b2635635ca7c579d9ee9d2d2bf61bf3d9a6b1"} Feb 20 14:44:22.213779 master-0 kubenswrapper[4172]: I0220 14:44:22.213747 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"5ea8ac7578359ce087855682fd87fbd08a72604f8701716ddbb28b051d93bff2"} Feb 20 14:44:22.214774 master-0 kubenswrapper[4172]: I0220 14:44:22.214746 4172 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"d15f8dfa0d113319aa72954517575419d7a6afcad7f7cef9517b2fb935c0ea42"} Feb 20 14:44:22.215554 master-0 kubenswrapper[4172]: I0220 14:44:22.215513 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"c7111b0bf2b7379929af69699174f229cbbc25f01fc7ffc44b3371950f17c6f2"} Feb 20 14:44:22.216287 master-0 kubenswrapper[4172]: I0220 14:44:22.216244 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"ff5aeff3d91fe04ad5b35e5f18daa8ee28aba3161b0999bafdb650c9674062ac"} Feb 20 14:44:22.797554 master-0 kubenswrapper[4172]: W0220 14:44:22.797472 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:22.797554 master-0 kubenswrapper[4172]: E0220 14:44:22.797543 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 14:44:23.058087 master-0 kubenswrapper[4172]: I0220 14:44:23.057956 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:23.073315 
master-0 kubenswrapper[4172]: E0220 14:44:23.073242 4172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 20 14:44:23.307226 master-0 kubenswrapper[4172]: I0220 14:44:23.307052 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:23.308158 master-0 kubenswrapper[4172]: I0220 14:44:23.308082 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:23.308158 master-0 kubenswrapper[4172]: I0220 14:44:23.308129 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:23.308158 master-0 kubenswrapper[4172]: I0220 14:44:23.308144 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:23.308293 master-0 kubenswrapper[4172]: I0220 14:44:23.308201 4172 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 14:44:23.309102 master-0 kubenswrapper[4172]: E0220 14:44:23.309070 4172 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 20 14:44:24.058235 master-0 kubenswrapper[4172]: I0220 14:44:24.058157 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:24.222032 master-0 kubenswrapper[4172]: I0220 14:44:24.221893 4172 generic.go:334] "Generic (PLEG): container finished" 
podID="c997c8e9d3be51d454d8e61e376bef08" containerID="8aa8f34057d37d62316a09602947b9934df303dc999d1b14efc423cb04940c72" exitCode=0 Feb 20 14:44:24.222032 master-0 kubenswrapper[4172]: I0220 14:44:24.221986 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"8aa8f34057d37d62316a09602947b9934df303dc999d1b14efc423cb04940c72"} Feb 20 14:44:24.222388 master-0 kubenswrapper[4172]: I0220 14:44:24.222108 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:24.223165 master-0 kubenswrapper[4172]: I0220 14:44:24.223114 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:24.223165 master-0 kubenswrapper[4172]: I0220 14:44:24.223148 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:24.223165 master-0 kubenswrapper[4172]: I0220 14:44:24.223156 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:24.228962 master-0 kubenswrapper[4172]: W0220 14:44:24.228865 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:24.229065 master-0 kubenswrapper[4172]: E0220 14:44:24.228971 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Feb 20 14:44:24.301476 master-0 kubenswrapper[4172]: W0220 14:44:24.301410 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:24.301476 master-0 kubenswrapper[4172]: E0220 14:44:24.301472 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 14:44:24.316958 master-0 kubenswrapper[4172]: W0220 14:44:24.316814 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:24.316958 master-0 kubenswrapper[4172]: E0220 14:44:24.316877 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 14:44:25.057843 master-0 kubenswrapper[4172]: I0220 14:44:25.057778 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:25.225895 master-0 kubenswrapper[4172]: I0220 14:44:25.225752 4172 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10"} Feb 20 14:44:25.225895 master-0 kubenswrapper[4172]: I0220 14:44:25.225834 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc"} Feb 20 14:44:25.226655 master-0 kubenswrapper[4172]: I0220 14:44:25.225913 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:25.227095 master-0 kubenswrapper[4172]: I0220 14:44:25.227053 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:25.227095 master-0 kubenswrapper[4172]: I0220 14:44:25.227079 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:25.227095 master-0 kubenswrapper[4172]: I0220 14:44:25.227087 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:25.228202 master-0 kubenswrapper[4172]: I0220 14:44:25.228160 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log" Feb 20 14:44:25.228860 master-0 kubenswrapper[4172]: I0220 14:44:25.228805 4172 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="5dae0cda083922f44e09675386ff70d64b6a454b5a905eda6e5187d7ab0422e0" exitCode=1 Feb 20 14:44:25.228860 master-0 kubenswrapper[4172]: I0220 14:44:25.228851 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"5dae0cda083922f44e09675386ff70d64b6a454b5a905eda6e5187d7ab0422e0"} Feb 20 14:44:25.229004 master-0 kubenswrapper[4172]: I0220 14:44:25.228957 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:25.229698 master-0 kubenswrapper[4172]: I0220 14:44:25.229663 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:25.229698 master-0 kubenswrapper[4172]: I0220 14:44:25.229686 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:25.229698 master-0 kubenswrapper[4172]: I0220 14:44:25.229694 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:25.229951 master-0 kubenswrapper[4172]: I0220 14:44:25.229910 4172 scope.go:117] "RemoveContainer" containerID="5dae0cda083922f44e09675386ff70d64b6a454b5a905eda6e5187d7ab0422e0" Feb 20 14:44:26.058009 master-0 kubenswrapper[4172]: I0220 14:44:26.057953 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:26.103405 master-0 kubenswrapper[4172]: I0220 14:44:26.103346 4172 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 20 14:44:26.104508 master-0 kubenswrapper[4172]: E0220 14:44:26.104452 4172 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 14:44:26.232553 master-0 kubenswrapper[4172]: I0220 14:44:26.232497 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 20 14:44:26.233100 master-0 kubenswrapper[4172]: I0220 14:44:26.233032 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log" Feb 20 14:44:26.233674 master-0 kubenswrapper[4172]: I0220 14:44:26.233608 4172 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="bb798ca2d5a26455ed20f988214c4091a2110223c74f07cdf2f44a8af1cef396" exitCode=1 Feb 20 14:44:26.233674 master-0 kubenswrapper[4172]: I0220 14:44:26.233650 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"bb798ca2d5a26455ed20f988214c4091a2110223c74f07cdf2f44a8af1cef396"} Feb 20 14:44:26.233815 master-0 kubenswrapper[4172]: I0220 14:44:26.233700 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:26.233815 master-0 kubenswrapper[4172]: I0220 14:44:26.233714 4172 scope.go:117] "RemoveContainer" containerID="5dae0cda083922f44e09675386ff70d64b6a454b5a905eda6e5187d7ab0422e0" Feb 20 14:44:26.233815 master-0 kubenswrapper[4172]: I0220 14:44:26.233749 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:26.234789 master-0 kubenswrapper[4172]: I0220 14:44:26.234756 4172 kubelet_node_status.go:724] "Recording event message for 
node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:26.234789 master-0 kubenswrapper[4172]: I0220 14:44:26.234786 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:26.234789 master-0 kubenswrapper[4172]: I0220 14:44:26.234796 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:26.234967 master-0 kubenswrapper[4172]: I0220 14:44:26.234946 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:26.235033 master-0 kubenswrapper[4172]: I0220 14:44:26.234978 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:26.235033 master-0 kubenswrapper[4172]: I0220 14:44:26.234993 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:26.235126 master-0 kubenswrapper[4172]: I0220 14:44:26.235073 4172 scope.go:117] "RemoveContainer" containerID="bb798ca2d5a26455ed20f988214c4091a2110223c74f07cdf2f44a8af1cef396" Feb 20 14:44:26.235539 master-0 kubenswrapper[4172]: E0220 14:44:26.235189 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 20 14:44:26.274873 master-0 kubenswrapper[4172]: E0220 14:44:26.274826 4172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: 
connect: connection refused" interval="6.4s" Feb 20 14:44:26.510074 master-0 kubenswrapper[4172]: I0220 14:44:26.509939 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:26.511608 master-0 kubenswrapper[4172]: I0220 14:44:26.511131 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:26.511608 master-0 kubenswrapper[4172]: I0220 14:44:26.511176 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:26.511608 master-0 kubenswrapper[4172]: I0220 14:44:26.511194 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:26.511608 master-0 kubenswrapper[4172]: I0220 14:44:26.511243 4172 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 14:44:26.512052 master-0 kubenswrapper[4172]: E0220 14:44:26.512007 4172 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 20 14:44:27.057613 master-0 kubenswrapper[4172]: I0220 14:44:27.057546 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:27.238478 master-0 kubenswrapper[4172]: I0220 14:44:27.238426 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log" Feb 20 14:44:27.239587 master-0 kubenswrapper[4172]: I0220 14:44:27.239542 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Feb 20 14:44:27.240426 master-0 kubenswrapper[4172]: I0220 14:44:27.240376 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:27.240426 master-0 kubenswrapper[4172]: I0220 14:44:27.240411 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:27.240426 master-0 kubenswrapper[4172]: I0220 14:44:27.240421 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:27.240764 master-0 kubenswrapper[4172]: I0220 14:44:27.240696 4172 scope.go:117] "RemoveContainer" containerID="bb798ca2d5a26455ed20f988214c4091a2110223c74f07cdf2f44a8af1cef396" Feb 20 14:44:27.240982 master-0 kubenswrapper[4172]: E0220 14:44:27.240909 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08" Feb 20 14:44:27.549426 master-0 kubenswrapper[4172]: W0220 14:44:27.549343 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:27.549426 master-0 kubenswrapper[4172]: E0220 14:44:27.549416 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Feb 20 14:44:28.058208 master-0 kubenswrapper[4172]: I0220 14:44:28.057462 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:28.248700 master-0 kubenswrapper[4172]: W0220 14:44:28.248553 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:28.248700 master-0 kubenswrapper[4172]: E0220 14:44:28.248659 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 14:44:28.552052 master-0 kubenswrapper[4172]: W0220 14:44:28.551564 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 20 14:44:28.552052 master-0 kubenswrapper[4172]: E0220 14:44:28.551642 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 20 14:44:29.058138 master-0 kubenswrapper[4172]: I0220 14:44:29.058064 4172 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 14:44:29.972518 master-0 kubenswrapper[4172]: E0220 14:44:29.972308 4172 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1895fb9850feb2cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.052628171 +0000 UTC m=+0.607853801,LastTimestamp:2026-02-20 14:44:20.052628171 +0000 UTC m=+0.607853801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:30.057908 master-0 kubenswrapper[4172]: I0220 14:44:30.057812 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 14:44:30.129287 master-0 kubenswrapper[4172]: W0220 14:44:30.129104 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 14:44:30.129287 master-0 kubenswrapper[4172]: E0220 14:44:30.129292 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 20 14:44:30.194786 master-0 kubenswrapper[4172]: E0220 14:44:30.194679 4172 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 20 14:44:31.058262 master-0 kubenswrapper[4172]: I0220 14:44:31.058036 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 20 14:44:31.254122 master-0 kubenswrapper[4172]: I0220 14:44:31.253967 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"6dbf7c55ace0ed513f2aaaeda5aa48d72fc75a02defc6cc2063a7bcf59d1c27f"}
Feb 20 14:44:31.256711 master-0 kubenswrapper[4172]: I0220 14:44:31.256631 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"1dbd1253fb8b09bfbaa096d3703dce0afe66c7bc42222d1d422586b85221b083"}
Feb 20 14:44:31.256862 master-0 kubenswrapper[4172]: I0220 14:44:31.256801 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:31.258403 master-0 kubenswrapper[4172]: I0220 14:44:31.258345 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:31.258403 master-0 kubenswrapper[4172]: I0220 14:44:31.258405 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:31.258568 master-0 kubenswrapper[4172]: I0220 14:44:31.258426 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:31.260996 master-0 kubenswrapper[4172]: I0220 14:44:31.260883 4172 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="92784546c39ab249199b64e99295b360ac694daa7345bcc5ca4290c1679248d5" exitCode=0
Feb 20 14:44:31.261145 master-0 kubenswrapper[4172]: I0220 14:44:31.261081 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:31.261277 master-0 kubenswrapper[4172]: I0220 14:44:31.261005 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"92784546c39ab249199b64e99295b360ac694daa7345bcc5ca4290c1679248d5"}
Feb 20 14:44:31.262289 master-0 kubenswrapper[4172]: I0220 14:44:31.262228 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:31.262289 master-0 kubenswrapper[4172]: I0220 14:44:31.262287 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:31.262488 master-0 kubenswrapper[4172]: I0220 14:44:31.262310 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:31.266980 master-0 kubenswrapper[4172]: I0220 14:44:31.266898 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:31.267998 master-0 kubenswrapper[4172]: I0220 14:44:31.267954 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:31.268053 master-0 kubenswrapper[4172]: I0220 14:44:31.268014 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:31.268053 master-0 kubenswrapper[4172]: I0220 14:44:31.268034 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:32.269740 master-0 kubenswrapper[4172]: I0220 14:44:32.268336 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"321be2d7453c33396b3363bf789e4d552d4e8d66090aa9915bf60f644a971c6e"}
Feb 20 14:44:32.270544 master-0 kubenswrapper[4172]: I0220 14:44:32.270335 4172 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="6dbf7c55ace0ed513f2aaaeda5aa48d72fc75a02defc6cc2063a7bcf59d1c27f" exitCode=1
Feb 20 14:44:32.270544 master-0 kubenswrapper[4172]: I0220 14:44:32.270433 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:32.270544 master-0 kubenswrapper[4172]: I0220 14:44:32.270477 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"6dbf7c55ace0ed513f2aaaeda5aa48d72fc75a02defc6cc2063a7bcf59d1c27f"}
Feb 20 14:44:32.271188 master-0 kubenswrapper[4172]: I0220 14:44:32.271159 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:32.271188 master-0 kubenswrapper[4172]: I0220 14:44:32.271190 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:32.271304 master-0 kubenswrapper[4172]: I0220 14:44:32.271206 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:32.912631 master-0 kubenswrapper[4172]: I0220 14:44:32.912347 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:32.913677 master-0 kubenswrapper[4172]: I0220 14:44:32.913629 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:32.913760 master-0 kubenswrapper[4172]: I0220 14:44:32.913688 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:32.913760 master-0 kubenswrapper[4172]: I0220 14:44:32.913707 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:32.913840 master-0 kubenswrapper[4172]: I0220 14:44:32.913768 4172 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 20 14:44:33.045253 master-0 kubenswrapper[4172]: I0220 14:44:33.045156 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:33.045253 master-0 kubenswrapper[4172]: E0220 14:44:33.045155 4172 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 20 14:44:33.045253 master-0 kubenswrapper[4172]: E0220 14:44:33.045183 4172 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 20 14:44:33.066372 master-0 kubenswrapper[4172]: I0220 14:44:33.066315 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found]
Feb 20 14:44:33.277568 master-0 kubenswrapper[4172]: I0220 14:44:33.277517 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2"}
Feb 20 14:44:33.278291 master-0 kubenswrapper[4172]: I0220 14:44:33.277686 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:33.279856 master-0 kubenswrapper[4172]: I0220 14:44:33.279163 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:33.279856 master-0 kubenswrapper[4172]: I0220 14:44:33.279208 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:33.279856 master-0 kubenswrapper[4172]: I0220 14:44:33.279222 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:33.279856 master-0 kubenswrapper[4172]: I0220 14:44:33.279571 4172 scope.go:117] "RemoveContainer" containerID="6dbf7c55ace0ed513f2aaaeda5aa48d72fc75a02defc6cc2063a7bcf59d1c27f"
Feb 20 14:44:33.334823 master-0 kubenswrapper[4172]: I0220 14:44:33.334748 4172 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:33.530044 master-0 kubenswrapper[4172]: I0220 14:44:33.530000 4172 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:34.066310 master-0 kubenswrapper[4172]: I0220 14:44:34.066270 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:34.281329 master-0 kubenswrapper[4172]: I0220 14:44:34.281254 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"24b2aee1f972d18ca4405ff399927f57d407665113e657b4f3db6303afde8747"}
Feb 20 14:44:34.281763 master-0 kubenswrapper[4172]: I0220 14:44:34.281411 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:34.281948 master-0 kubenswrapper[4172]: I0220 14:44:34.281890 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:34.282381 master-0 kubenswrapper[4172]: I0220 14:44:34.282345 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:34.282430 master-0 kubenswrapper[4172]: I0220 14:44:34.282383 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:34.282430 master-0 kubenswrapper[4172]: I0220 14:44:34.282403 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:34.578632 master-0 kubenswrapper[4172]: I0220 14:44:34.578537 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:34.894639 master-0 kubenswrapper[4172]: I0220 14:44:34.894521 4172 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 20 14:44:34.916290 master-0 kubenswrapper[4172]: I0220 14:44:34.916199 4172 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 20 14:44:35.064338 master-0 kubenswrapper[4172]: I0220 14:44:35.064266 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:35.285708 master-0 kubenswrapper[4172]: I0220 14:44:35.285669 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:35.286332 master-0 kubenswrapper[4172]: I0220 14:44:35.286122 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:35.286332 master-0 kubenswrapper[4172]: I0220 14:44:35.286300 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"3c2b6c4d3887c6ce78fb1f319d3d917dd19b6ede5e9ab3d53c00d05b6ea4ef23"}
Feb 20 14:44:35.286671 master-0 kubenswrapper[4172]: I0220 14:44:35.286645 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:35.286671 master-0 kubenswrapper[4172]: I0220 14:44:35.286667 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:35.286780 master-0 kubenswrapper[4172]: I0220 14:44:35.286676 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:35.287089 master-0 kubenswrapper[4172]: I0220 14:44:35.287058 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:35.287089 master-0 kubenswrapper[4172]: I0220 14:44:35.287071 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:35.287089 master-0 kubenswrapper[4172]: I0220 14:44:35.287078 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:35.908917 master-0 kubenswrapper[4172]: W0220 14:44:35.908825 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 20 14:44:35.909269 master-0 kubenswrapper[4172]: E0220 14:44:35.908919 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 20 14:44:36.064814 master-0 kubenswrapper[4172]: I0220 14:44:36.064750 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:36.181335 master-0 kubenswrapper[4172]: W0220 14:44:36.181118 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 20 14:44:36.181335 master-0 kubenswrapper[4172]: E0220 14:44:36.181199 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 20 14:44:36.288119 master-0 kubenswrapper[4172]: I0220 14:44:36.288043 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:36.288976 master-0 kubenswrapper[4172]: I0220 14:44:36.288073 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:36.289199 master-0 kubenswrapper[4172]: I0220 14:44:36.289132 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:36.289199 master-0 kubenswrapper[4172]: I0220 14:44:36.289198 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:36.289452 master-0 kubenswrapper[4172]: I0220 14:44:36.289216 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:36.290373 master-0 kubenswrapper[4172]: I0220 14:44:36.290246 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:36.290373 master-0 kubenswrapper[4172]: I0220 14:44:36.290289 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:36.290373 master-0 kubenswrapper[4172]: I0220 14:44:36.290305 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:37.064822 master-0 kubenswrapper[4172]: I0220 14:44:37.064691 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:37.127156 master-0 kubenswrapper[4172]: W0220 14:44:37.127054 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:37.127156 master-0 kubenswrapper[4172]: E0220 14:44:37.127158 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 20 14:44:38.067161 master-0 kubenswrapper[4172]: I0220 14:44:38.067055 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:38.278205 master-0 kubenswrapper[4172]: I0220 14:44:38.278100 4172 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:38.278477 master-0 kubenswrapper[4172]: I0220 14:44:38.278285 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:38.279600 master-0 kubenswrapper[4172]: I0220 14:44:38.279523 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:38.279600 master-0 kubenswrapper[4172]: I0220 14:44:38.279581 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:38.279600 master-0 kubenswrapper[4172]: I0220 14:44:38.279600 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:38.285509 master-0 kubenswrapper[4172]: I0220 14:44:38.285455 4172 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:38.293492 master-0 kubenswrapper[4172]: I0220 14:44:38.293409 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:38.293703 master-0 kubenswrapper[4172]: I0220 14:44:38.293645 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:38.294661 master-0 kubenswrapper[4172]: I0220 14:44:38.294609 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:38.294791 master-0 kubenswrapper[4172]: I0220 14:44:38.294676 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:38.294791 master-0 kubenswrapper[4172]: I0220 14:44:38.294701 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:38.300270 master-0 kubenswrapper[4172]: I0220 14:44:38.300215 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:44:39.066617 master-0 kubenswrapper[4172]: I0220 14:44:39.066520 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:39.295699 master-0 kubenswrapper[4172]: I0220 14:44:39.295611 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:39.296854 master-0 kubenswrapper[4172]: I0220 14:44:39.296801 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:39.296989 master-0 kubenswrapper[4172]: I0220 14:44:39.296907 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:39.296989 master-0 kubenswrapper[4172]: I0220 14:44:39.296949 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:39.649742 master-0 kubenswrapper[4172]: I0220 14:44:39.649667 4172 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:39.650053 master-0 kubenswrapper[4172]: I0220 14:44:39.649866 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:39.651278 master-0 kubenswrapper[4172]: I0220 14:44:39.651213 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:39.651278 master-0 kubenswrapper[4172]: I0220 14:44:39.651284 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:39.651514 master-0 kubenswrapper[4172]: I0220 14:44:39.651338 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:39.655890 master-0 kubenswrapper[4172]: I0220 14:44:39.655850 4172 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:39.980634 master-0 kubenswrapper[4172]: E0220 14:44:39.980358 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9850feb2cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.052628171 +0000 UTC m=+0.607853801,LastTimestamp:2026-02-20 14:44:20.052628171 +0000 UTC m=+0.607853801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:39.987239 master-0 kubenswrapper[4172]: E0220 14:44:39.987111 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854750fa7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110716839 +0000 UTC m=+0.665942469,LastTimestamp:2026-02-20 14:44:20.110716839 +0000 UTC m=+0.665942469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:39.994153 master-0 kubenswrapper[4172]: E0220 14:44:39.993971 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854760674 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.11078002 +0000 UTC m=+0.666005660,LastTimestamp:2026-02-20 14:44:20.11078002 +0000 UTC m=+0.666005660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.001479 master-0 kubenswrapper[4172]: E0220 14:44:40.001065 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb985476a1e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110819811 +0000 UTC m=+0.666045451,LastTimestamp:2026-02-20 14:44:20.110819811 +0000 UTC m=+0.666045451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.009163 master-0 kubenswrapper[4172]: E0220 14:44:40.008993 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb98599ac3f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.197073911 +0000 UTC m=+0.752299551,LastTimestamp:2026-02-20 14:44:20.197073911 +0000 UTC m=+0.752299551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.019918 master-0 kubenswrapper[4172]: E0220 14:44:40.019774 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854750fa7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854750fa7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110716839 +0000 UTC m=+0.665942469,LastTimestamp:2026-02-20 14:44:20.293822912 +0000 UTC m=+0.849048542,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.027205 master-0 kubenswrapper[4172]: E0220 14:44:40.027011 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854760674\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854760674 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.11078002 +0000 UTC m=+0.666005660,LastTimestamp:2026-02-20 14:44:20.293862013 +0000 UTC m=+0.849087643,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.033157 master-0 kubenswrapper[4172]: E0220 14:44:40.033007 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb985476a1e3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb985476a1e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110819811 +0000 UTC m=+0.666045451,LastTimestamp:2026-02-20 14:44:20.293884014 +0000 UTC m=+0.849109654,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.040408 master-0 kubenswrapper[4172]: E0220 14:44:40.040264 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854750fa7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854750fa7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110716839 +0000 UTC m=+0.665942469,LastTimestamp:2026-02-20 14:44:20.308307548 +0000 UTC m=+0.863533188,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.045508 master-0 kubenswrapper[4172]: I0220 14:44:40.045394 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:40.046539 master-0 kubenswrapper[4172]: E0220 14:44:40.045458 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854760674\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854760674 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.11078002 +0000 UTC m=+0.666005660,LastTimestamp:2026-02-20 14:44:20.308336459 +0000 UTC m=+0.863562099,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.046662 master-0 kubenswrapper[4172]: I0220 14:44:40.046567 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:40.046662 master-0 kubenswrapper[4172]: I0220 14:44:40.046626 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:40.046662 master-0 kubenswrapper[4172]: I0220 14:44:40.046645 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:40.046829 master-0 kubenswrapper[4172]: I0220 14:44:40.046720 4172 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 20 14:44:40.052578 master-0 kubenswrapper[4172]: E0220 14:44:40.052520 4172 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 20 14:44:40.052685 master-0 kubenswrapper[4172]: E0220 14:44:40.052507 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb985476a1e3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb985476a1e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110819811 +0000 UTC m=+0.666045451,LastTimestamp:2026-02-20 14:44:20.30835403 +0000 UTC m=+0.863579670,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.053477 master-0 kubenswrapper[4172]: E0220 14:44:40.052998 4172 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 20 14:44:40.055862 master-0 kubenswrapper[4172]: E0220 14:44:40.055713 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854750fa7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854750fa7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110716839 +0000 UTC m=+0.665942469,LastTimestamp:2026-02-20 14:44:20.309372226 +0000 UTC m=+0.864597856,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.060748 master-0 kubenswrapper[4172]: I0220 14:44:40.060706 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:40.061225 master-0 kubenswrapper[4172]: E0220 14:44:40.061083 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854760674\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854760674 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.11078002 +0000 UTC m=+0.666005660,LastTimestamp:2026-02-20 14:44:20.309403507 +0000 UTC m=+0.864629147,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.062859 master-0 kubenswrapper[4172]: E0220 14:44:40.062737 4172 event.go:359] "Server rejected event (will
not retry!)" err="events \"master-0.1895fb985476a1e3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb985476a1e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110819811 +0000 UTC m=+0.666045451,LastTimestamp:2026-02-20 14:44:20.309419447 +0000 UTC m=+0.864645077,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.070008 master-0 kubenswrapper[4172]: E0220 14:44:40.069849 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854750fa7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854750fa7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110716839 +0000 UTC m=+0.665942469,LastTimestamp:2026-02-20 14:44:20.309716455 +0000 UTC m=+0.864942085,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.076625 master-0 kubenswrapper[4172]: E0220 14:44:40.076502 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854760674\" is forbidden: 
User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854760674 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.11078002 +0000 UTC m=+0.666005660,LastTimestamp:2026-02-20 14:44:20.309748056 +0000 UTC m=+0.864973686,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.083310 master-0 kubenswrapper[4172]: E0220 14:44:40.083195 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb985476a1e3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb985476a1e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110819811 +0000 UTC m=+0.666045451,LastTimestamp:2026-02-20 14:44:20.309767376 +0000 UTC m=+0.864993006,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.090304 master-0 kubenswrapper[4172]: E0220 14:44:40.090164 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854750fa7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" 
in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854750fa7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110716839 +0000 UTC m=+0.665942469,LastTimestamp:2026-02-20 14:44:20.310318691 +0000 UTC m=+0.865544331,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.097217 master-0 kubenswrapper[4172]: E0220 14:44:40.097010 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854760674\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854760674 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.11078002 +0000 UTC m=+0.666005660,LastTimestamp:2026-02-20 14:44:20.310336801 +0000 UTC m=+0.865562441,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.104153 master-0 kubenswrapper[4172]: E0220 14:44:40.104008 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb985476a1e3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{master-0.1895fb985476a1e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110819811 +0000 UTC m=+0.666045451,LastTimestamp:2026-02-20 14:44:20.310352301 +0000 UTC m=+0.865577931,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.110721 master-0 kubenswrapper[4172]: E0220 14:44:40.110588 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854750fa7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854750fa7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110716839 +0000 UTC m=+0.665942469,LastTimestamp:2026-02-20 14:44:20.310805453 +0000 UTC m=+0.866031093,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.117581 master-0 kubenswrapper[4172]: E0220 14:44:40.117432 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854760674\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854760674 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.11078002 +0000 UTC m=+0.666005660,LastTimestamp:2026-02-20 14:44:20.310832284 +0000 UTC m=+0.866057914,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.125635 master-0 kubenswrapper[4172]: E0220 14:44:40.125465 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb985476a1e3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb985476a1e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110819811 +0000 UTC m=+0.666045451,LastTimestamp:2026-02-20 14:44:20.310856384 +0000 UTC m=+0.866082034,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.134081 master-0 kubenswrapper[4172]: E0220 14:44:40.133891 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854750fa7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854750fa7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.110716839 +0000 UTC m=+0.665942469,LastTimestamp:2026-02-20 14:44:20.313314498 +0000 UTC m=+0.868540138,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.140823 master-0 kubenswrapper[4172]: E0220 14:44:40.140661 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1895fb9854760674\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1895fb9854760674 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:20.11078002 +0000 UTC m=+0.666005660,LastTimestamp:2026-02-20 14:44:20.313353029 +0000 UTC m=+0.868578669,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.149449 master-0 kubenswrapper[4172]: E0220 14:44:40.149284 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895fb98a2bbe4f7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:21.423981815 +0000 UTC m=+1.979207625,LastTimestamp:2026-02-20 14:44:21.423981815 +0000 UTC m=+1.979207625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.157156 master-0 kubenswrapper[4172]: E0220 14:44:40.156981 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895fb98a2bbfa05 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:21.423987205 +0000 UTC m=+1.979212845,LastTimestamp:2026-02-20 14:44:21.423987205 +0000 UTC m=+1.979212845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.165637 master-0 kubenswrapper[4172]: E0220 14:44:40.165438 4172 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895fb98a2f6e2b8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:21.427847864 +0000 UTC m=+1.983073464,LastTimestamp:2026-02-20 14:44:21.427847864 +0000 UTC m=+1.983073464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.173124 master-0 kubenswrapper[4172]: E0220 14:44:40.172909 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1895fb98a6505fea kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:21.484044266 +0000 UTC m=+2.039269906,LastTimestamp:2026-02-20 14:44:21.484044266 +0000 UTC 
m=+2.039269906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.179865 master-0 kubenswrapper[4172]: E0220 14:44:40.179729 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb98aca66160 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:21.590344032 +0000 UTC m=+2.145569652,LastTimestamp:2026-02-20 14:44:21.590344032 +0000 UTC m=+2.145569652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.187034 master-0 kubenswrapper[4172]: E0220 14:44:40.186809 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb99025938fe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" in 1.437s (1.437s including waiting). Image size: 464984427 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:23.028127998 +0000 UTC m=+3.583353598,LastTimestamp:2026-02-20 14:44:23.028127998 +0000 UTC m=+3.583353598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.194343 master-0 kubenswrapper[4172]: E0220 14:44:40.194187 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb9911fcdb78 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:23.2905102 +0000 UTC m=+3.845735810,LastTimestamp:2026-02-20 14:44:23.2905102 +0000 UTC m=+3.845735810,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.194860 master-0 kubenswrapper[4172]: E0220 14:44:40.194802 4172 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 20 14:44:40.202247 master-0 kubenswrapper[4172]: E0220 14:44:40.202056 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb9912b5ed02 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:23.30263885 +0000 UTC m=+3.857864460,LastTimestamp:2026-02-20 14:44:23.30263885 +0000 UTC m=+3.857864460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.210314 master-0 kubenswrapper[4172]: E0220 14:44:40.210156 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb99646f5934 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.673745204 +0000 UTC m=+5.228970844,LastTimestamp:2026-02-20 14:44:24.673745204 +0000 UTC m=+5.228970844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.217814 master-0 kubenswrapper[4172]: E0220 14:44:40.217640 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895fb996780212f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" in 3.297s (3.297s including waiting). 
Image size: 529218694 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.725176623 +0000 UTC m=+5.280402223,LastTimestamp:2026-02-20 14:44:24.725176623 +0000 UTC m=+5.280402223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.225329 master-0 kubenswrapper[4172]: E0220 14:44:40.224968 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb99726078b9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.907651257 +0000 UTC m=+5.462876857,LastTimestamp:2026-02-20 14:44:24.907651257 +0000 UTC m=+5.462876857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.232541 master-0 kubenswrapper[4172]: E0220 14:44:40.232308 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895fb997265c2a4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.90799786 +0000 UTC m=+5.463223460,LastTimestamp:2026-02-20 14:44:24.90799786 +0000 UTC m=+5.463223460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.239827 master-0 kubenswrapper[4172]: E0220 14:44:40.239624 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb997340dbc2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.922356674 +0000 UTC m=+5.477582274,LastTimestamp:2026-02-20 14:44:24.922356674 +0000 UTC m=+5.477582274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.247425 master-0 kubenswrapper[4172]: E0220 14:44:40.247277 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-master-0-master-0.1895fb9973e99063 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.933412963 +0000 UTC m=+5.488638563,LastTimestamp:2026-02-20 14:44:24.933412963 +0000 UTC m=+5.488638563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.254481 master-0 kubenswrapper[4172]: E0220 14:44:40.254333 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895fb9974232010 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.937185296 +0000 UTC m=+5.492410896,LastTimestamp:2026-02-20 14:44:24.937185296 +0000 UTC m=+5.492410896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.261617 master-0 kubenswrapper[4172]: E0220 14:44:40.261485 4172 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895fb997e068c94 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:25.103084692 +0000 UTC m=+5.658310292,LastTimestamp:2026-02-20 14:44:25.103084692 +0000 UTC m=+5.658310292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.271836 master-0 kubenswrapper[4172]: E0220 14:44:40.271593 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895fb997ee0587b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:25.117358203 +0000 UTC m=+5.672583803,LastTimestamp:2026-02-20 14:44:25.117358203 +0000 UTC m=+5.672583803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.284642 master-0 kubenswrapper[4172]: E0220 14:44:40.284458 4172 event.go:359] "Server rejected event (will not retry!)" 
err="events \"kube-rbac-proxy-crio-master-0.1895fb99646f5934\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb99646f5934 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.673745204 +0000 UTC m=+5.228970844,LastTimestamp:2026-02-20 14:44:25.232277308 +0000 UTC m=+5.787502898,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.292117 master-0 kubenswrapper[4172]: E0220 14:44:40.291851 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895fb99726078b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb99726078b9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.907651257 +0000 UTC m=+5.462876857,LastTimestamp:2026-02-20 14:44:25.388647083 +0000 UTC m=+5.943872683,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.299594 master-0 kubenswrapper[4172]: I0220 14:44:40.299522 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:40.300591 master-0 kubenswrapper[4172]: I0220 14:44:40.299641 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:44:40.300591 master-0 kubenswrapper[4172]: E0220 14:44:40.299536 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895fb997340dbc2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb997340dbc2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.922356674 +0000 UTC m=+5.477582274,LastTimestamp:2026-02-20 14:44:25.399555337 +0000 UTC m=+5.954780937,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.300906 master-0 kubenswrapper[4172]: I0220 14:44:40.300847 4172 kubelet_node_status.go:724] "Recording event message for 
node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:40.300906 master-0 kubenswrapper[4172]: I0220 14:44:40.300847 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:44:40.300906 master-0 kubenswrapper[4172]: I0220 14:44:40.300893 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:40.300906 master-0 kubenswrapper[4172]: I0220 14:44:40.300908 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:44:40.301276 master-0 kubenswrapper[4172]: I0220 14:44:40.300959 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:40.301276 master-0 kubenswrapper[4172]: I0220 14:44:40.300912 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:44:40.308117 master-0 kubenswrapper[4172]: E0220 14:44:40.307919 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb99c180ca14 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:26.2351693 +0000 UTC 
m=+6.790394900,LastTimestamp:2026-02-20 14:44:26.2351693 +0000 UTC m=+6.790394900,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.315747 master-0 kubenswrapper[4172]: E0220 14:44:40.315575 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895fb99c180ca14\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb99c180ca14 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:26.2351693 +0000 UTC m=+6.790394900,LastTimestamp:2026-02-20 14:44:27.240883485 +0000 UTC m=+7.796109085,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.323636 master-0 kubenswrapper[4172]: E0220 14:44:40.323506 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895fb9ac2f4a064 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 9.13s (9.13s including waiting). Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:30.554505316 +0000 UTC m=+11.109730916,LastTimestamp:2026-02-20 14:44:30.554505316 +0000 UTC m=+11.109730916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.331829 master-0 kubenswrapper[4172]: E0220 14:44:40.331584 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895fb9ac4f7496a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 9.164s (9.164s including waiting). 
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:30.58823409 +0000 UTC m=+11.143459730,LastTimestamp:2026-02-20 14:44:30.58823409 +0000 UTC m=+11.143459730,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.339647 master-0 kubenswrapper[4172]: E0220 14:44:40.339512 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1895fb9ac664c385 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 9.128s (9.128s including waiting). 
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:30.612185989 +0000 UTC m=+11.167411629,LastTimestamp:2026-02-20 14:44:30.612185989 +0000 UTC m=+11.167411629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.346596 master-0 kubenswrapper[4172]: E0220 14:44:40.346457 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895fb9ad26b88d1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:30.813956305 +0000 UTC m=+11.369181925,LastTimestamp:2026-02-20 14:44:30.813956305 +0000 UTC m=+11.369181925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.353589 master-0 kubenswrapper[4172]: E0220 14:44:40.353450 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895fb9ad31bf966 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:30.825519462 +0000 UTC m=+11.380745072,LastTimestamp:2026-02-20 14:44:30.825519462 +0000 UTC m=+11.380745072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.361322 master-0 kubenswrapper[4172]: E0220 14:44:40.361163 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895fb9ad32deae4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:30.826695396 +0000 UTC m=+11.381921006,LastTimestamp:2026-02-20 14:44:30.826695396 +0000 UTC m=+11.381921006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.369320 master-0 kubenswrapper[4172]: E0220 14:44:40.369176 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895fb9ad3a99d67 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:30.834802023 +0000 UTC m=+11.390027633,LastTimestamp:2026-02-20 14:44:30.834802023 +0000 UTC m=+11.390027633,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.376189 master-0 kubenswrapper[4172]: E0220 14:44:40.376037 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1895fb9ad46923db kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:30.847353819 +0000 UTC m=+11.402579429,LastTimestamp:2026-02-20 14:44:30.847353819 +0000 UTC m=+11.402579429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.383326 master-0 kubenswrapper[4172]: E0220 14:44:40.383174 4172 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895fb9ad480d04e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:30.848905294 +0000 UTC m=+11.404130904,LastTimestamp:2026-02-20 14:44:30.848905294 +0000 UTC m=+11.404130904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.390127 master-0 kubenswrapper[4172]: E0220 14:44:40.389993 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1895fb9ad5259134 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:30.85970258 +0000 UTC m=+11.414928190,LastTimestamp:2026-02-20 14:44:30.85970258 +0000 UTC m=+11.414928190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 
14:44:40.397504 master-0 kubenswrapper[4172]: E0220 14:44:40.397323 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895fb9aed68fe09 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:31.266774537 +0000 UTC m=+11.822000177,LastTimestamp:2026-02-20 14:44:31.266774537 +0000 UTC m=+11.822000177,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.404616 master-0 kubenswrapper[4172]: E0220 14:44:40.404465 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895fb9aff317e5d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 
14:44:31.565127261 +0000 UTC m=+12.120352861,LastTimestamp:2026-02-20 14:44:31.565127261 +0000 UTC m=+12.120352861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.411725 master-0 kubenswrapper[4172]: E0220 14:44:40.411519 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895fb9b06b89c10 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:31.691422736 +0000 UTC m=+12.246648366,LastTimestamp:2026-02-20 14:44:31.691422736 +0000 UTC m=+12.246648366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.418466 master-0 kubenswrapper[4172]: E0220 14:44:40.418311 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895fb9b06cc1fcc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:31.692701644 +0000 UTC m=+12.247927284,LastTimestamp:2026-02-20 14:44:31.692701644 +0000 UTC m=+12.247927284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.426138 master-0 kubenswrapper[4172]: E0220 14:44:40.425976 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895fb9b525cb60d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\" in 2.133s (2.133s including waiting). 
Image size: 505137106 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:32.960468493 +0000 UTC m=+13.515694113,LastTimestamp:2026-02-20 14:44:32.960468493 +0000 UTC m=+13.515694113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.432762 master-0 kubenswrapper[4172]: E0220 14:44:40.432612 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895fb9b5f7c2b01 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:33.180633857 +0000 UTC m=+13.735859467,LastTimestamp:2026-02-20 14:44:33.180633857 +0000 UTC m=+13.735859467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.440955 master-0 kubenswrapper[4172]: E0220 14:44:40.440703 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895fb9b606a7587 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:33.196250503 +0000 UTC m=+13.751476113,LastTimestamp:2026-02-20 14:44:33.196250503 +0000 UTC m=+13.751476113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.449407 master-0 kubenswrapper[4172]: E0220 14:44:40.449232 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895fb9b65966012 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:33.283014674 +0000 UTC m=+13.838240284,LastTimestamp:2026-02-20 14:44:33.283014674 +0000 UTC m=+13.838240284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.456319 master-0 kubenswrapper[4172]: E0220 14:44:40.456115 4172 event.go:359] "Server rejected event (will not retry!)" err="events 
\"bootstrap-kube-controller-manager-master-0.1895fb9ad26b88d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895fb9ad26b88d1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:30.813956305 +0000 UTC m=+11.369181925,LastTimestamp:2026-02-20 14:44:33.537157499 +0000 UTC m=+14.092383109,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:44:40.466355 master-0 kubenswrapper[4172]: E0220 14:44:40.466159 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.1895fb9ad31bf966\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1895fb9ad31bf966 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:30.825519462 +0000 UTC m=+11.380745072,LastTimestamp:2026-02-20 14:44:33.549674674 +0000 UTC 
m=+14.104900274,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.474284 master-0 kubenswrapper[4172]: E0220 14:44:40.474064 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895fb9b9ed41fea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" in 2.55s (2.55s including waiting). Image size: 514875199 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:34.243362794 +0000 UTC m=+14.798588394,LastTimestamp:2026-02-20 14:44:34.243362794 +0000 UTC m=+14.798588394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.481458 master-0 kubenswrapper[4172]: E0220 14:44:40.481328 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895fb9ba9d3f7bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:34.427901887 +0000 UTC m=+14.983127527,LastTimestamp:2026-02-20 14:44:34.427901887 +0000 UTC m=+14.983127527,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.488653 master-0 kubenswrapper[4172]: E0220 14:44:40.488543 4172 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1895fb9baa8d4b51 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:34.440047441 +0000 UTC m=+14.995273071,LastTimestamp:2026-02-20 14:44:34.440047441 +0000 UTC m=+14.995273071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:40.968977 master-0 kubenswrapper[4172]: W0220 14:44:40.968743 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 20 14:44:40.968977 master-0 kubenswrapper[4172]: E0220 14:44:40.968812 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 20 14:44:41.070978 master-0 kubenswrapper[4172]: I0220 14:44:41.068224 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:42.068128 master-0 kubenswrapper[4172]: I0220 14:44:42.067974 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:42.207453 master-0 kubenswrapper[4172]: I0220 14:44:42.207328 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:42.209251 master-0 kubenswrapper[4172]: I0220 14:44:42.209183 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:42.209251 master-0 kubenswrapper[4172]: I0220 14:44:42.209253 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:42.209485 master-0 kubenswrapper[4172]: I0220 14:44:42.209273 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:42.209843 master-0 kubenswrapper[4172]: I0220 14:44:42.209798 4172 scope.go:117] "RemoveContainer" containerID="bb798ca2d5a26455ed20f988214c4091a2110223c74f07cdf2f44a8af1cef396"
Feb 20 14:44:42.221850 master-0 kubenswrapper[4172]: E0220 14:44:42.221686 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895fb99646f5934\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb99646f5934 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.673745204 +0000 UTC m=+5.228970844,LastTimestamp:2026-02-20 14:44:42.213908957 +0000 UTC m=+22.769134597,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:42.490944 master-0 kubenswrapper[4172]: E0220 14:44:42.490459 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895fb99726078b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb99726078b9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.907651257 +0000 UTC m=+5.462876857,LastTimestamp:2026-02-20 14:44:42.482443942 +0000 UTC m=+23.037669582,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:42.505117 master-0 kubenswrapper[4172]: E0220 14:44:42.504907 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895fb997340dbc2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb997340dbc2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:24.922356674 +0000 UTC m=+5.477582274,LastTimestamp:2026-02-20 14:44:42.497642416 +0000 UTC m=+23.052868046,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:43.065576 master-0 kubenswrapper[4172]: I0220 14:44:43.065489 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:43.310488 master-0 kubenswrapper[4172]: I0220 14:44:43.310384 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 20 14:44:43.311547 master-0 kubenswrapper[4172]: I0220 14:44:43.311261 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log"
Feb 20 14:44:43.311949 master-0 kubenswrapper[4172]: I0220 14:44:43.311851 4172 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="45a697749c461413b0722aa1be0b316cc858779a0e80c5ef44f0a3c27a2f1822" exitCode=1
Feb 20 14:44:43.312087 master-0 kubenswrapper[4172]: I0220 14:44:43.311906 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"45a697749c461413b0722aa1be0b316cc858779a0e80c5ef44f0a3c27a2f1822"}
Feb 20 14:44:43.312087 master-0 kubenswrapper[4172]: I0220 14:44:43.312054 4172 scope.go:117] "RemoveContainer" containerID="bb798ca2d5a26455ed20f988214c4091a2110223c74f07cdf2f44a8af1cef396"
Feb 20 14:44:43.312282 master-0 kubenswrapper[4172]: I0220 14:44:43.312229 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:43.313527 master-0 kubenswrapper[4172]: I0220 14:44:43.313467 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:43.313527 master-0 kubenswrapper[4172]: I0220 14:44:43.313526 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:43.313527 master-0 kubenswrapper[4172]: I0220 14:44:43.313545 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:43.314899 master-0 kubenswrapper[4172]: I0220 14:44:43.314061 4172 scope.go:117] "RemoveContainer" containerID="45a697749c461413b0722aa1be0b316cc858779a0e80c5ef44f0a3c27a2f1822"
Feb 20 14:44:43.314899 master-0 kubenswrapper[4172]: E0220 14:44:43.314456 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 20 14:44:43.325380 master-0 kubenswrapper[4172]: E0220 14:44:43.322578 4172 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1895fb99c180ca14\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1895fb99c180ca14 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:44:26.2351693 +0000 UTC m=+6.790394900,LastTimestamp:2026-02-20 14:44:43.314410236 +0000 UTC m=+23.869635866,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:44:44.063609 master-0 kubenswrapper[4172]: I0220 14:44:44.063518 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:44.317128 master-0 kubenswrapper[4172]: I0220 14:44:44.317006 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 20 14:44:44.432701 master-0 kubenswrapper[4172]: I0220 14:44:44.432633 4172 csr.go:261] certificate signing request csr-hbqnt is approved, waiting to be issued
Feb 20 14:44:44.585020 master-0 kubenswrapper[4172]: I0220 14:44:44.584840 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:44.585234 master-0 kubenswrapper[4172]: I0220 14:44:44.585064 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:44.587022 master-0 kubenswrapper[4172]: I0220 14:44:44.586951 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:44.587022 master-0 kubenswrapper[4172]: I0220 14:44:44.587014 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:44.587022 master-0 kubenswrapper[4172]: I0220 14:44:44.587033 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:44.593087 master-0 kubenswrapper[4172]: I0220 14:44:44.593033 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:44:45.063960 master-0 kubenswrapper[4172]: I0220 14:44:45.063812 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:45.321505 master-0 kubenswrapper[4172]: I0220 14:44:45.321336 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:45.322489 master-0 kubenswrapper[4172]: I0220 14:44:45.322435 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:45.322489 master-0 kubenswrapper[4172]: I0220 14:44:45.322486 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:45.322672 master-0 kubenswrapper[4172]: I0220 14:44:45.322538 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:46.064046 master-0 kubenswrapper[4172]: I0220 14:44:46.063953 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:47.052950 master-0 kubenswrapper[4172]: I0220 14:44:47.052838 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:47.053870 master-0 kubenswrapper[4172]: I0220 14:44:47.053829 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:47.053870 master-0 kubenswrapper[4172]: I0220 14:44:47.053853 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:47.053870 master-0 kubenswrapper[4172]: I0220 14:44:47.053861 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:47.054073 master-0 kubenswrapper[4172]: I0220 14:44:47.053901 4172 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 20 14:44:47.059877 master-0 kubenswrapper[4172]: I0220 14:44:47.059805 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:47.059877 master-0 kubenswrapper[4172]: E0220 14:44:47.059828 4172 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 20 14:44:47.060199 master-0 kubenswrapper[4172]: E0220 14:44:47.060141 4172 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 20 14:44:48.058431 master-0 kubenswrapper[4172]: I0220 14:44:48.058331 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:49.061898 master-0 kubenswrapper[4172]: I0220 14:44:49.061789 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:49.420265 master-0 kubenswrapper[4172]: W0220 14:44:49.420111 4172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 20 14:44:49.420265 master-0 kubenswrapper[4172]: E0220 14:44:49.420186 4172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 20 14:44:50.064677 master-0 kubenswrapper[4172]: I0220 14:44:50.064574 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:50.195386 master-0 kubenswrapper[4172]: E0220 14:44:50.195254 4172 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 20 14:44:51.064181 master-0 kubenswrapper[4172]: I0220 14:44:51.064070 4172 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 20 14:44:51.672720 master-0 kubenswrapper[4172]: I0220 14:44:51.672659 4172 csr.go:257] certificate signing request csr-hbqnt is issued
Feb 20 14:44:51.726592 master-0 kubenswrapper[4172]: I0220 14:44:51.726534 4172 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 20 14:44:51.906581 master-0 kubenswrapper[4172]: I0220 14:44:51.906494 4172 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 20 14:44:51.906889 master-0 kubenswrapper[4172]: W0220 14:44:51.906815 4172 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 20 14:44:52.072162 master-0 kubenswrapper[4172]: I0220 14:44:52.072116 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:52.087479 master-0 kubenswrapper[4172]: I0220 14:44:52.087417 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:52.143686 master-0 kubenswrapper[4172]: I0220 14:44:52.143611 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:52.410357 master-0 kubenswrapper[4172]: I0220 14:44:52.410208 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:52.410357 master-0 kubenswrapper[4172]: E0220 14:44:52.410248 4172 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 20 14:44:52.432651 master-0 kubenswrapper[4172]: I0220 14:44:52.432559 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:52.452898 master-0 kubenswrapper[4172]: I0220 14:44:52.452793 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:52.514484 master-0 kubenswrapper[4172]: I0220 14:44:52.514372 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:52.674980 master-0 kubenswrapper[4172]: I0220 14:44:52.674815 4172 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-21 14:36:18 +0000 UTC, rotation deadline is 2026-02-21 08:37:53.402295851 +0000 UTC
Feb 20 14:44:52.674980 master-0 kubenswrapper[4172]: I0220 14:44:52.674873 4172 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h53m0.727428181s for next certificate rotation
Feb 20 14:44:52.776439 master-0 kubenswrapper[4172]: I0220 14:44:52.776394 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:52.776439 master-0 kubenswrapper[4172]: E0220 14:44:52.776430 4172 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 20 14:44:52.896662 master-0 kubenswrapper[4172]: I0220 14:44:52.896597 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:52.922026 master-0 kubenswrapper[4172]: I0220 14:44:52.921977 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:52.979705 master-0 kubenswrapper[4172]: I0220 14:44:52.979607 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:53.241318 master-0 kubenswrapper[4172]: I0220 14:44:53.241248 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:53.241318 master-0 kubenswrapper[4172]: E0220 14:44:53.241314 4172 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 20 14:44:53.840693 master-0 kubenswrapper[4172]: I0220 14:44:53.840606 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:53.856668 master-0 kubenswrapper[4172]: I0220 14:44:53.856600 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:53.914755 master-0 kubenswrapper[4172]: I0220 14:44:53.914664 4172 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 20 14:44:54.060856 master-0 kubenswrapper[4172]: I0220 14:44:54.060782 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:54.062461 master-0 kubenswrapper[4172]: I0220 14:44:54.062417 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:54.062597 master-0 kubenswrapper[4172]: I0220 14:44:54.062476 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:54.062597 master-0 kubenswrapper[4172]: I0220 14:44:54.062501 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:54.062597 master-0 kubenswrapper[4172]: I0220 14:44:54.062585 4172 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 20 14:44:54.066555 master-0 kubenswrapper[4172]: E0220 14:44:54.066493 4172 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Feb 20 14:44:54.074010 master-0 kubenswrapper[4172]: I0220 14:44:54.073919 4172 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 20 14:44:54.074010 master-0 kubenswrapper[4172]: E0220 14:44:54.074012 4172 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Feb 20 14:44:54.088055 master-0 kubenswrapper[4172]: E0220 14:44:54.088000 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:54.189103 master-0 kubenswrapper[4172]: E0220 14:44:54.188913 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:54.290177 master-0 kubenswrapper[4172]: E0220 14:44:54.290055 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:54.391180 master-0 kubenswrapper[4172]: E0220 14:44:54.391095 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:54.491376 master-0 kubenswrapper[4172]: E0220 14:44:54.491277 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:54.591737 master-0 kubenswrapper[4172]: E0220 14:44:54.591641 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:54.692765 master-0 kubenswrapper[4172]: E0220 14:44:54.692699 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:54.793830 master-0 kubenswrapper[4172]: E0220 14:44:54.793657 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:54.894778 master-0 kubenswrapper[4172]: E0220 14:44:54.894664 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:54.994941 master-0 kubenswrapper[4172]: E0220 14:44:54.994871 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:55.087665 master-0 kubenswrapper[4172]: I0220 14:44:55.087016 4172 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 20 14:44:55.095169 master-0 kubenswrapper[4172]: E0220 14:44:55.095096 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:55.100723 master-0 kubenswrapper[4172]: I0220 14:44:55.100660 4172 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 20 14:44:55.195999 master-0 kubenswrapper[4172]: E0220 14:44:55.195847 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:55.297066 master-0 kubenswrapper[4172]: E0220 14:44:55.297011 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:55.398314 master-0 kubenswrapper[4172]: E0220 14:44:55.398150 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:55.498765 master-0 kubenswrapper[4172]: E0220 14:44:55.498682 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:55.599834 master-0 kubenswrapper[4172]: E0220 14:44:55.599720 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:55.700485 master-0 kubenswrapper[4172]: E0220 14:44:55.700269 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:55.800482 master-0 kubenswrapper[4172]: E0220 14:44:55.800390 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:55.901445 master-0 kubenswrapper[4172]: E0220 14:44:55.901320 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:56.001636 master-0 kubenswrapper[4172]: E0220 14:44:56.001519 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:56.102466 master-0 kubenswrapper[4172]: E0220 14:44:56.102323 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:56.203409 master-0 kubenswrapper[4172]: E0220 14:44:56.203259 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:56.207811 master-0 kubenswrapper[4172]: I0220 14:44:56.207751 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 20 14:44:56.209115 master-0 kubenswrapper[4172]: I0220 14:44:56.209053 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 20 14:44:56.209252 master-0 kubenswrapper[4172]: I0220 14:44:56.209119 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 20 14:44:56.209252 master-0 kubenswrapper[4172]: I0220 14:44:56.209192 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:44:56.209784 master-0 kubenswrapper[4172]: I0220 14:44:56.209740 4172 scope.go:117] "RemoveContainer" containerID="45a697749c461413b0722aa1be0b316cc858779a0e80c5ef44f0a3c27a2f1822"
Feb 20 14:44:56.210111 master-0 kubenswrapper[4172]: E0220 14:44:56.210058 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 20 14:44:56.304447 master-0 kubenswrapper[4172]: E0220 14:44:56.304257 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:56.405440 master-0 kubenswrapper[4172]: E0220 14:44:56.405370 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:56.505667 master-0 kubenswrapper[4172]: E0220 14:44:56.505561 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:56.605859 master-0 kubenswrapper[4172]: E0220 14:44:56.605633 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:56.706616 master-0 kubenswrapper[4172]: E0220 14:44:56.706536 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:56.807599 master-0 kubenswrapper[4172]: E0220 14:44:56.807493 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:56.908459 master-0 kubenswrapper[4172]: E0220 14:44:56.908304 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:57.009234 master-0 kubenswrapper[4172]: E0220 14:44:57.009153 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:57.109821 master-0 kubenswrapper[4172]: E0220 14:44:57.109757 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:57.210090 master-0 kubenswrapper[4172]: E0220 14:44:57.209956 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:57.311126 master-0 kubenswrapper[4172]: E0220 14:44:57.311045 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:57.412205 master-0 kubenswrapper[4172]: E0220 14:44:57.412086 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:57.421181 master-0 kubenswrapper[4172]: I0220 14:44:57.421041 4172 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 20 14:44:57.512421 master-0 kubenswrapper[4172]: E0220 14:44:57.512296 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:57.613603 master-0 kubenswrapper[4172]: E0220 14:44:57.613470 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:57.714284 master-0 kubenswrapper[4172]: E0220 14:44:57.714186 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:57.815226 master-0 kubenswrapper[4172]: E0220 14:44:57.815065 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:57.915642 master-0 kubenswrapper[4172]: E0220 14:44:57.915538 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:58.016121 master-0 kubenswrapper[4172]: E0220 14:44:58.016011 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:58.116596 master-0 kubenswrapper[4172]: E0220 14:44:58.116425 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:58.216659 master-0 kubenswrapper[4172]: E0220 14:44:58.216560 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:58.316967 master-0 kubenswrapper[4172]: E0220 14:44:58.316864 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:58.417536 master-0 kubenswrapper[4172]: E0220 14:44:58.417377 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:58.518491 master-0 kubenswrapper[4172]: E0220 14:44:58.518382 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:58.619506 master-0 kubenswrapper[4172]: E0220 14:44:58.619417 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:58.720606 master-0 kubenswrapper[4172]: E0220 14:44:58.720389 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:58.821458 master-0 kubenswrapper[4172]: E0220 14:44:58.821341 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:58.922538 master-0 kubenswrapper[4172]: E0220 14:44:58.922430 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:59.022739 master-0 kubenswrapper[4172]: E0220 14:44:59.022643 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:59.123155 master-0 kubenswrapper[4172]: E0220 14:44:59.123016 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:59.223956 master-0 kubenswrapper[4172]: E0220 14:44:59.223838 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:59.325188 master-0 kubenswrapper[4172]: E0220 14:44:59.325026 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:59.425972 master-0 kubenswrapper[4172]: E0220 14:44:59.425873 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:59.526239 master-0 kubenswrapper[4172]: E0220 14:44:59.526102 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:59.627264 master-0 kubenswrapper[4172]: E0220 14:44:59.627088 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:59.728257 master-0 kubenswrapper[4172]: E0220 14:44:59.728144 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:59.829292 master-0 kubenswrapper[4172]: E0220 14:44:59.829184 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:44:59.930060 master-0 kubenswrapper[4172]: E0220 14:44:59.929860 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:00.030575 master-0 kubenswrapper[4172]: E0220 14:45:00.030335 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:00.130958 master-0 kubenswrapper[4172]: E0220 14:45:00.130805 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:00.195977 master-0 kubenswrapper[4172]: E0220 14:45:00.195776 4172 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 20 14:45:00.231960 master-0 kubenswrapper[4172]:
E0220 14:45:00.231855 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:00.332798 master-0 kubenswrapper[4172]: E0220 14:45:00.332696 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:00.433832 master-0 kubenswrapper[4172]: E0220 14:45:00.433673 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:00.534106 master-0 kubenswrapper[4172]: E0220 14:45:00.533904 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:00.635290 master-0 kubenswrapper[4172]: E0220 14:45:00.635185 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:00.736385 master-0 kubenswrapper[4172]: E0220 14:45:00.736195 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:00.837707 master-0 kubenswrapper[4172]: E0220 14:45:00.837485 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:00.938548 master-0 kubenswrapper[4172]: E0220 14:45:00.938438 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:01.039271 master-0 kubenswrapper[4172]: E0220 14:45:01.039145 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:01.139722 master-0 kubenswrapper[4172]: E0220 14:45:01.139510 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:01.240580 master-0 kubenswrapper[4172]: E0220 14:45:01.240469 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not 
found" Feb 20 14:45:01.341644 master-0 kubenswrapper[4172]: E0220 14:45:01.341554 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:01.442809 master-0 kubenswrapper[4172]: E0220 14:45:01.442657 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:01.543372 master-0 kubenswrapper[4172]: E0220 14:45:01.543306 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:01.644424 master-0 kubenswrapper[4172]: E0220 14:45:01.644343 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:01.744578 master-0 kubenswrapper[4172]: E0220 14:45:01.744500 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:01.845732 master-0 kubenswrapper[4172]: E0220 14:45:01.845656 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:01.946856 master-0 kubenswrapper[4172]: E0220 14:45:01.946804 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:02.047257 master-0 kubenswrapper[4172]: E0220 14:45:02.047121 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:02.148355 master-0 kubenswrapper[4172]: E0220 14:45:02.148293 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:02.248804 master-0 kubenswrapper[4172]: E0220 14:45:02.248755 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:02.349882 master-0 kubenswrapper[4172]: E0220 14:45:02.349702 4172 kubelet_node_status.go:503] "Error getting 
the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:02.450259 master-0 kubenswrapper[4172]: E0220 14:45:02.450149 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:02.550435 master-0 kubenswrapper[4172]: E0220 14:45:02.550320 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:02.650861 master-0 kubenswrapper[4172]: E0220 14:45:02.650690 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:02.751768 master-0 kubenswrapper[4172]: E0220 14:45:02.751667 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:02.852966 master-0 kubenswrapper[4172]: E0220 14:45:02.852851 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:02.954025 master-0 kubenswrapper[4172]: E0220 14:45:02.953876 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:03.054443 master-0 kubenswrapper[4172]: E0220 14:45:03.054354 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:03.155208 master-0 kubenswrapper[4172]: E0220 14:45:03.155122 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:03.255543 master-0 kubenswrapper[4172]: E0220 14:45:03.255445 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:03.356155 master-0 kubenswrapper[4172]: E0220 14:45:03.356057 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:03.457295 master-0 kubenswrapper[4172]: E0220 
14:45:03.457209 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:03.557598 master-0 kubenswrapper[4172]: E0220 14:45:03.557418 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:03.658127 master-0 kubenswrapper[4172]: E0220 14:45:03.658073 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:03.758886 master-0 kubenswrapper[4172]: E0220 14:45:03.758836 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:03.859716 master-0 kubenswrapper[4172]: E0220 14:45:03.859590 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:03.960823 master-0 kubenswrapper[4172]: E0220 14:45:03.960740 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:04.061888 master-0 kubenswrapper[4172]: E0220 14:45:04.061817 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:04.162159 master-0 kubenswrapper[4172]: E0220 14:45:04.161990 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:04.219700 master-0 kubenswrapper[4172]: I0220 14:45:04.219621 4172 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 20 14:45:04.262975 master-0 kubenswrapper[4172]: E0220 14:45:04.262861 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:04.363975 master-0 kubenswrapper[4172]: E0220 14:45:04.363883 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 
20 14:45:04.406377 master-0 kubenswrapper[4172]: E0220 14:45:04.406288 4172 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Feb 20 14:45:04.814055 master-0 kubenswrapper[4172]: E0220 14:45:04.813973 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:04.914652 master-0 kubenswrapper[4172]: E0220 14:45:04.914513 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:05.015790 master-0 kubenswrapper[4172]: E0220 14:45:05.015610 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:05.117019 master-0 kubenswrapper[4172]: E0220 14:45:05.116848 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:05.217078 master-0 kubenswrapper[4172]: E0220 14:45:05.217015 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:05.318231 master-0 kubenswrapper[4172]: E0220 14:45:05.318092 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:05.419337 master-0 kubenswrapper[4172]: E0220 14:45:05.419195 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:05.448859 master-0 kubenswrapper[4172]: I0220 14:45:05.448782 4172 csr.go:261] certificate signing request csr-wmbq2 is approved, waiting to be issued Feb 20 14:45:05.463520 master-0 kubenswrapper[4172]: I0220 14:45:05.463452 4172 csr.go:257] certificate signing request csr-wmbq2 is issued Feb 20 14:45:05.520114 master-0 kubenswrapper[4172]: E0220 14:45:05.520017 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"master-0\" not found" Feb 20 14:45:05.620272 master-0 kubenswrapper[4172]: E0220 14:45:05.620173 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:05.721644 master-0 kubenswrapper[4172]: E0220 14:45:05.721411 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:05.822605 master-0 kubenswrapper[4172]: E0220 14:45:05.822499 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:05.923103 master-0 kubenswrapper[4172]: E0220 14:45:05.922994 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:06.023251 master-0 kubenswrapper[4172]: E0220 14:45:06.023139 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:06.123961 master-0 kubenswrapper[4172]: E0220 14:45:06.123826 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:06.224418 master-0 kubenswrapper[4172]: E0220 14:45:06.224327 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:06.324865 master-0 kubenswrapper[4172]: E0220 14:45:06.324639 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:06.424916 master-0 kubenswrapper[4172]: E0220 14:45:06.424814 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:06.465952 master-0 kubenswrapper[4172]: I0220 14:45:06.465791 4172 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-21 14:36:18 +0000 UTC, rotation deadline is 2026-02-21 10:43:55.927917737 +0000 UTC Feb 20 14:45:06.465952 master-0 
kubenswrapper[4172]: I0220 14:45:06.465841 4172 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h58m49.462081124s for next certificate rotation Feb 20 14:45:06.525651 master-0 kubenswrapper[4172]: E0220 14:45:06.525541 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:06.626867 master-0 kubenswrapper[4172]: E0220 14:45:06.626695 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:06.727641 master-0 kubenswrapper[4172]: E0220 14:45:06.727560 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:06.828555 master-0 kubenswrapper[4172]: E0220 14:45:06.828451 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:06.929669 master-0 kubenswrapper[4172]: E0220 14:45:06.929464 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:07.029759 master-0 kubenswrapper[4172]: E0220 14:45:07.029635 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:07.130859 master-0 kubenswrapper[4172]: E0220 14:45:07.130756 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:07.231976 master-0 kubenswrapper[4172]: E0220 14:45:07.231730 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:07.332853 master-0 kubenswrapper[4172]: E0220 14:45:07.332729 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:07.433593 master-0 kubenswrapper[4172]: E0220 14:45:07.433490 4172 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"master-0\" not found" Feb 20 14:45:07.467191 master-0 kubenswrapper[4172]: I0220 14:45:07.467101 4172 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-21 14:36:18 +0000 UTC, rotation deadline is 2026-02-21 10:33:01.532194005 +0000 UTC Feb 20 14:45:07.467191 master-0 kubenswrapper[4172]: I0220 14:45:07.467157 4172 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h47m54.065042607s for next certificate rotation Feb 20 14:45:07.534768 master-0 kubenswrapper[4172]: E0220 14:45:07.534676 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:07.635295 master-0 kubenswrapper[4172]: E0220 14:45:07.635195 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:07.736159 master-0 kubenswrapper[4172]: E0220 14:45:07.736031 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:07.837264 master-0 kubenswrapper[4172]: E0220 14:45:07.837128 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:07.938225 master-0 kubenswrapper[4172]: E0220 14:45:07.938131 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:08.039371 master-0 kubenswrapper[4172]: E0220 14:45:08.039300 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:08.139558 master-0 kubenswrapper[4172]: E0220 14:45:08.139467 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:08.240746 master-0 kubenswrapper[4172]: E0220 14:45:08.240631 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 
14:45:08.341718 master-0 kubenswrapper[4172]: E0220 14:45:08.341612 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:08.442645 master-0 kubenswrapper[4172]: E0220 14:45:08.442450 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:08.543171 master-0 kubenswrapper[4172]: E0220 14:45:08.543047 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:08.643343 master-0 kubenswrapper[4172]: E0220 14:45:08.643200 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:08.744320 master-0 kubenswrapper[4172]: E0220 14:45:08.744171 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:08.845232 master-0 kubenswrapper[4172]: E0220 14:45:08.845075 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:08.945563 master-0 kubenswrapper[4172]: E0220 14:45:08.945460 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:09.045874 master-0 kubenswrapper[4172]: E0220 14:45:09.045649 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:09.146789 master-0 kubenswrapper[4172]: E0220 14:45:09.146682 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:09.246970 master-0 kubenswrapper[4172]: E0220 14:45:09.246818 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:09.348077 master-0 kubenswrapper[4172]: E0220 14:45:09.347773 4172 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"master-0\" not found" Feb 20 14:45:09.448733 master-0 kubenswrapper[4172]: E0220 14:45:09.448652 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:09.549178 master-0 kubenswrapper[4172]: E0220 14:45:09.549100 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:09.649748 master-0 kubenswrapper[4172]: E0220 14:45:09.649618 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:09.750206 master-0 kubenswrapper[4172]: E0220 14:45:09.750106 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:09.850356 master-0 kubenswrapper[4172]: E0220 14:45:09.850278 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:09.951032 master-0 kubenswrapper[4172]: E0220 14:45:09.950904 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:10.051744 master-0 kubenswrapper[4172]: E0220 14:45:10.051708 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:10.152669 master-0 kubenswrapper[4172]: E0220 14:45:10.152596 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:10.196628 master-0 kubenswrapper[4172]: E0220 14:45:10.196554 4172 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 20 14:45:10.253260 master-0 kubenswrapper[4172]: E0220 14:45:10.253191 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:10.354345 master-0 kubenswrapper[4172]: 
E0220 14:45:10.354237 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:10.455447 master-0 kubenswrapper[4172]: E0220 14:45:10.455345 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:10.556147 master-0 kubenswrapper[4172]: E0220 14:45:10.556005 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:10.657117 master-0 kubenswrapper[4172]: E0220 14:45:10.657033 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:10.758097 master-0 kubenswrapper[4172]: E0220 14:45:10.757991 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:10.859033 master-0 kubenswrapper[4172]: E0220 14:45:10.858869 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:10.959293 master-0 kubenswrapper[4172]: E0220 14:45:10.959198 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:11.059684 master-0 kubenswrapper[4172]: E0220 14:45:11.059529 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:11.159968 master-0 kubenswrapper[4172]: E0220 14:45:11.159723 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:11.207544 master-0 kubenswrapper[4172]: I0220 14:45:11.207450 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:45:11.208893 master-0 kubenswrapper[4172]: I0220 14:45:11.208812 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" 
Feb 20 14:45:11.208893 master-0 kubenswrapper[4172]: I0220 14:45:11.208886 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:45:11.209112 master-0 kubenswrapper[4172]: I0220 14:45:11.208907 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:45:11.209503 master-0 kubenswrapper[4172]: I0220 14:45:11.209457 4172 scope.go:117] "RemoveContainer" containerID="45a697749c461413b0722aa1be0b316cc858779a0e80c5ef44f0a3c27a2f1822" Feb 20 14:45:11.260132 master-0 kubenswrapper[4172]: E0220 14:45:11.260065 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:11.361051 master-0 kubenswrapper[4172]: E0220 14:45:11.360969 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:11.461613 master-0 kubenswrapper[4172]: E0220 14:45:11.461536 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:11.562128 master-0 kubenswrapper[4172]: E0220 14:45:11.562055 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:11.662825 master-0 kubenswrapper[4172]: E0220 14:45:11.662769 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:11.763496 master-0 kubenswrapper[4172]: E0220 14:45:11.763432 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:11.864260 master-0 kubenswrapper[4172]: E0220 14:45:11.864212 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:11.964567 master-0 kubenswrapper[4172]: E0220 14:45:11.964503 4172 kubelet_node_status.go:503] "Error getting 
the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:12.065709 master-0 kubenswrapper[4172]: E0220 14:45:12.065524 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:12.166768 master-0 kubenswrapper[4172]: E0220 14:45:12.166706 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:12.267462 master-0 kubenswrapper[4172]: E0220 14:45:12.267357 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:12.368449 master-0 kubenswrapper[4172]: E0220 14:45:12.368253 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 20 14:45:12.391546 master-0 kubenswrapper[4172]: I0220 14:45:12.391490 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 20 14:45:12.392136 master-0 kubenswrapper[4172]: I0220 14:45:12.392095 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"246a4a72bf2ebfa2d43942f255f719a181c7fa6fae84b5f564297d3cc7eff684"} Feb 20 14:45:12.392273 master-0 kubenswrapper[4172]: I0220 14:45:12.392248 4172 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:45:12.393335 master-0 kubenswrapper[4172]: I0220 14:45:12.393286 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:45:12.393410 master-0 kubenswrapper[4172]: I0220 14:45:12.393349 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 
14:45:12.393410 master-0 kubenswrapper[4172]: I0220 14:45:12.393373 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 20 14:45:12.468809 master-0 kubenswrapper[4172]: E0220 14:45:12.468740 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:12.568986 master-0 kubenswrapper[4172]: E0220 14:45:12.568878 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:12.669868 master-0 kubenswrapper[4172]: E0220 14:45:12.669723 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:12.769969 master-0 kubenswrapper[4172]: E0220 14:45:12.769880 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:12.871043 master-0 kubenswrapper[4172]: E0220 14:45:12.870973 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:12.972156 master-0 kubenswrapper[4172]: E0220 14:45:12.972028 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:13.073136 master-0 kubenswrapper[4172]: E0220 14:45:13.073092 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:13.173277 master-0 kubenswrapper[4172]: E0220 14:45:13.173192 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:13.273662 master-0 kubenswrapper[4172]: E0220 14:45:13.273582 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:13.374310 master-0 kubenswrapper[4172]: E0220 14:45:13.374237 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:13.475334 master-0 kubenswrapper[4172]: E0220 14:45:13.475252 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:13.576497 master-0 kubenswrapper[4172]: E0220 14:45:13.576345 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:13.676769 master-0 kubenswrapper[4172]: E0220 14:45:13.676678 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:13.777637 master-0 kubenswrapper[4172]: E0220 14:45:13.777539 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:13.878753 master-0 kubenswrapper[4172]: E0220 14:45:13.878566 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:13.979747 master-0 kubenswrapper[4172]: E0220 14:45:13.979652 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:14.080492 master-0 kubenswrapper[4172]: E0220 14:45:14.080383 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:14.181610 master-0 kubenswrapper[4172]: E0220 14:45:14.181444 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:14.281947 master-0 kubenswrapper[4172]: E0220 14:45:14.281826 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:14.382825 master-0 kubenswrapper[4172]: E0220 14:45:14.382746 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:14.484033 master-0 kubenswrapper[4172]: E0220 14:45:14.483857 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:14.584367 master-0 kubenswrapper[4172]: E0220 14:45:14.584326 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:14.685445 master-0 kubenswrapper[4172]: E0220 14:45:14.685340 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:14.786598 master-0 kubenswrapper[4172]: E0220 14:45:14.786507 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:14.887746 master-0 kubenswrapper[4172]: E0220 14:45:14.887623 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:14.988642 master-0 kubenswrapper[4172]: E0220 14:45:14.988530 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:15.018781 master-0 kubenswrapper[4172]: E0220 14:45:15.018713 4172 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Feb 20 14:45:15.089819 master-0 kubenswrapper[4172]: E0220 14:45:15.089659 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:15.190545 master-0 kubenswrapper[4172]: E0220 14:45:15.190427 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:15.291635 master-0 kubenswrapper[4172]: E0220 14:45:15.291562 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:15.392660 master-0 kubenswrapper[4172]: E0220 14:45:15.392494 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:15.493157 master-0 kubenswrapper[4172]: E0220 14:45:15.492799 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:15.594104 master-0 kubenswrapper[4172]: E0220 14:45:15.594042 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:15.695354 master-0 kubenswrapper[4172]: E0220 14:45:15.695193 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:15.795659 master-0 kubenswrapper[4172]: E0220 14:45:15.795552 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:15.896642 master-0 kubenswrapper[4172]: E0220 14:45:15.896555 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:15.997764 master-0 kubenswrapper[4172]: E0220 14:45:15.997655 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:16.098580 master-0 kubenswrapper[4172]: E0220 14:45:16.098456 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:16.199656 master-0 kubenswrapper[4172]: E0220 14:45:16.199567 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:16.300530 master-0 kubenswrapper[4172]: E0220 14:45:16.300306 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:16.400663 master-0 kubenswrapper[4172]: E0220 14:45:16.400520 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:16.501706 master-0 kubenswrapper[4172]: E0220 14:45:16.501580 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:16.602958 master-0 kubenswrapper[4172]: E0220 14:45:16.602760 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:16.703957 master-0 kubenswrapper[4172]: E0220 14:45:16.703821 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:16.804948 master-0 kubenswrapper[4172]: E0220 14:45:16.804799 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:16.906206 master-0 kubenswrapper[4172]: E0220 14:45:16.906020 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:17.007291 master-0 kubenswrapper[4172]: E0220 14:45:17.007168 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:17.107736 master-0 kubenswrapper[4172]: E0220 14:45:17.107611 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:17.208881 master-0 kubenswrapper[4172]: E0220 14:45:17.208688 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:17.308981 master-0 kubenswrapper[4172]: E0220 14:45:17.308799 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:17.409770 master-0 kubenswrapper[4172]: E0220 14:45:17.409681 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:17.510799 master-0 kubenswrapper[4172]: E0220 14:45:17.510700 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:17.611963 master-0 kubenswrapper[4172]: E0220 14:45:17.611825 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:17.712608 master-0 kubenswrapper[4172]: E0220 14:45:17.712517 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:17.813567 master-0 kubenswrapper[4172]: E0220 14:45:17.813386 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:17.914234 master-0 kubenswrapper[4172]: E0220 14:45:17.914106 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:18.014365 master-0 kubenswrapper[4172]: E0220 14:45:18.014217 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:18.115031 master-0 kubenswrapper[4172]: E0220 14:45:18.114803 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:18.216136 master-0 kubenswrapper[4172]: E0220 14:45:18.216011 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:18.316873 master-0 kubenswrapper[4172]: E0220 14:45:18.316787 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:18.417623 master-0 kubenswrapper[4172]: E0220 14:45:18.417442 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:18.517868 master-0 kubenswrapper[4172]: E0220 14:45:18.517775 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:18.619028 master-0 kubenswrapper[4172]: E0220 14:45:18.618902 4172 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 20 14:45:18.644760 master-0 kubenswrapper[4172]: I0220 14:45:18.644695 4172 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 20 14:45:19.072087 master-0 kubenswrapper[4172]: I0220 14:45:19.071381 4172 apiserver.go:52] "Watching apiserver"
Feb 20 14:45:19.076028 master-0 kubenswrapper[4172]: I0220 14:45:19.075919 4172 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 20 14:45:19.076201 master-0 kubenswrapper[4172]: I0220 14:45:19.076141 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-wtxfh","openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9","openshift-network-operator/network-operator-7d7db75979-tj8fx"]
Feb 20 14:45:19.076606 master-0 kubenswrapper[4172]: I0220 14:45:19.076527 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.077581 master-0 kubenswrapper[4172]: I0220 14:45:19.077494 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.078036 master-0 kubenswrapper[4172]: I0220 14:45:19.077648 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:45:19.080424 master-0 kubenswrapper[4172]: I0220 14:45:19.080319 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Feb 20 14:45:19.081275 master-0 kubenswrapper[4172]: I0220 14:45:19.080902 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 20 14:45:19.081275 master-0 kubenswrapper[4172]: I0220 14:45:19.081029 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 20 14:45:19.081275 master-0 kubenswrapper[4172]: I0220 14:45:19.081116 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Feb 20 14:45:19.082965 master-0 kubenswrapper[4172]: I0220 14:45:19.082901 4172 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Feb 20 14:45:19.083087 master-0 kubenswrapper[4172]: I0220 14:45:19.083042 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 20 14:45:19.083183 master-0 kubenswrapper[4172]: I0220 14:45:19.083125 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Feb 20 14:45:19.083724 master-0 kubenswrapper[4172]: I0220 14:45:19.083537 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 20 14:45:19.083724 master-0 kubenswrapper[4172]: I0220 14:45:19.083723 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 20 14:45:19.084081 master-0 kubenswrapper[4172]: I0220 14:45:19.084025 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 20 14:45:19.167783 master-0 kubenswrapper[4172]: I0220 14:45:19.167715 4172 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 20 14:45:19.179039 master-0 kubenswrapper[4172]: I0220 14:45:19.178898 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.179039 master-0 kubenswrapper[4172]: I0220 14:45:19.179019 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-ca-bundle\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.179279 master-0 kubenswrapper[4172]: I0220 14:45:19.179100 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8b94\" (UniqueName: \"kubernetes.io/projected/014f3913-ac7e-431a-880c-91d979a5dfc7-kube-api-access-w8b94\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.179279 master-0 kubenswrapper[4172]: I0220 14:45:19.179148 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fd9f419-2cdc-4991-8fb9-87d76ac58976-host-etc-kube\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:45:19.179279 master-0 kubenswrapper[4172]: I0220 14:45:19.179223 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-var-run-resolv-conf\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.179460 master-0 kubenswrapper[4172]: I0220 14:45:19.179349 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cede061-d85a-4366-9f1e-90be51f726fc-service-ca\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.179460 master-0 kubenswrapper[4172]: I0220 14:45:19.179423 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.179579 master-0 kubenswrapper[4172]: I0220 14:45:19.179461 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svlzf\" (UniqueName: \"kubernetes.io/projected/9fd9f419-2cdc-4991-8fb9-87d76ac58976-kube-api-access-svlzf\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:45:19.179579 master-0 kubenswrapper[4172]: I0220 14:45:19.179505 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-sno-bootstrap-files\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.179579 master-0 kubenswrapper[4172]: I0220 14:45:19.179540 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.179742 master-0 kubenswrapper[4172]: I0220 14:45:19.179582 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-resolv-conf\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.179742 master-0 kubenswrapper[4172]: I0220 14:45:19.179638 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9fd9f419-2cdc-4991-8fb9-87d76ac58976-metrics-tls\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:45:19.179742 master-0 kubenswrapper[4172]: I0220 14:45:19.179691 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cede061-d85a-4366-9f1e-90be51f726fc-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.281138 master-0 kubenswrapper[4172]: I0220 14:45:19.281010 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-ca-bundle\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.281437 master-0 kubenswrapper[4172]: I0220 14:45:19.281274 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-ca-bundle\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.281437 master-0 kubenswrapper[4172]: I0220 14:45:19.281344 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8b94\" (UniqueName: \"kubernetes.io/projected/014f3913-ac7e-431a-880c-91d979a5dfc7-kube-api-access-w8b94\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.281623 master-0 kubenswrapper[4172]: I0220 14:45:19.281573 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fd9f419-2cdc-4991-8fb9-87d76ac58976-host-etc-kube\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:45:19.281702 master-0 kubenswrapper[4172]: I0220 14:45:19.281634 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-var-run-resolv-conf\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.281702 master-0 kubenswrapper[4172]: I0220 14:45:19.281673 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cede061-d85a-4366-9f1e-90be51f726fc-service-ca\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.281702 master-0 kubenswrapper[4172]: I0220 14:45:19.281676 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fd9f419-2cdc-4991-8fb9-87d76ac58976-host-etc-kube\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:45:19.281891 master-0 kubenswrapper[4172]: I0220 14:45:19.281748 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-var-run-resolv-conf\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.281891 master-0 kubenswrapper[4172]: I0220 14:45:19.281757 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-sno-bootstrap-files\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.281891 master-0 kubenswrapper[4172]: I0220 14:45:19.281804 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-sno-bootstrap-files\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.281891 master-0 kubenswrapper[4172]: I0220 14:45:19.281815 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.281891 master-0 kubenswrapper[4172]: I0220 14:45:19.281851 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.281891 master-0 kubenswrapper[4172]: I0220 14:45:19.281884 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svlzf\" (UniqueName: \"kubernetes.io/projected/9fd9f419-2cdc-4991-8fb9-87d76ac58976-kube-api-access-svlzf\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:45:19.282315 master-0 kubenswrapper[4172]: I0220 14:45:19.281955 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-resolv-conf\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.282315 master-0 kubenswrapper[4172]: I0220 14:45:19.281992 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9fd9f419-2cdc-4991-8fb9-87d76ac58976-metrics-tls\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:45:19.282315 master-0 kubenswrapper[4172]: I0220 14:45:19.282026 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cede061-d85a-4366-9f1e-90be51f726fc-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.282315 master-0 kubenswrapper[4172]: I0220 14:45:19.282062 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.282315 master-0 kubenswrapper[4172]: I0220 14:45:19.282131 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.282315 master-0 kubenswrapper[4172]: I0220 14:45:19.282163 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-resolv-conf\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.282682 master-0 kubenswrapper[4172]: I0220 14:45:19.282490 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.283156 master-0 kubenswrapper[4172]: I0220 14:45:19.283105 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cede061-d85a-4366-9f1e-90be51f726fc-service-ca\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.283504 master-0 kubenswrapper[4172]: E0220 14:45:19.283457 4172 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 14:45:19.283998 master-0 kubenswrapper[4172]: E0220 14:45:19.283957 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:45:19.78355327 +0000 UTC m=+60.338778980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found
Feb 20 14:45:19.284182 master-0 kubenswrapper[4172]: I0220 14:45:19.284131 4172 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 20 14:45:19.292086 master-0 kubenswrapper[4172]: I0220 14:45:19.292030 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9fd9f419-2cdc-4991-8fb9-87d76ac58976-metrics-tls\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:45:19.312948 master-0 kubenswrapper[4172]: I0220 14:45:19.312854 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cede061-d85a-4366-9f1e-90be51f726fc-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.313123 master-0 kubenswrapper[4172]: I0220 14:45:19.313033 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8b94\" (UniqueName: \"kubernetes.io/projected/014f3913-ac7e-431a-880c-91d979a5dfc7-kube-api-access-w8b94\") pod \"assisted-installer-controller-wtxfh\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.313123 master-0 kubenswrapper[4172]: I0220 14:45:19.313070 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svlzf\" (UniqueName: \"kubernetes.io/projected/9fd9f419-2cdc-4991-8fb9-87d76ac58976-kube-api-access-svlzf\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:45:19.429635 master-0 kubenswrapper[4172]: I0220 14:45:19.429423 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:45:19.445224 master-0 kubenswrapper[4172]: W0220 14:45:19.445149 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod014f3913_ac7e_431a_880c_91d979a5dfc7.slice/crio-617ccef4b48beb8ed1f21a9b1c418d8de1fbc1ee6e5e89c3998a1f0b78051407 WatchSource:0}: Error finding container 617ccef4b48beb8ed1f21a9b1c418d8de1fbc1ee6e5e89c3998a1f0b78051407: Status 404 returned error can't find the container with id 617ccef4b48beb8ed1f21a9b1c418d8de1fbc1ee6e5e89c3998a1f0b78051407
Feb 20 14:45:19.460123 master-0 kubenswrapper[4172]: I0220 14:45:19.460080 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:45:19.471479 master-0 kubenswrapper[4172]: W0220 14:45:19.471400 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fd9f419_2cdc_4991_8fb9_87d76ac58976.slice/crio-469af398b29095aa460373b4a9d58261db50995525853368aaa76c2198d9753f WatchSource:0}: Error finding container 469af398b29095aa460373b4a9d58261db50995525853368aaa76c2198d9753f: Status 404 returned error can't find the container with id 469af398b29095aa460373b4a9d58261db50995525853368aaa76c2198d9753f
Feb 20 14:45:19.786178 master-0 kubenswrapper[4172]: I0220 14:45:19.786036 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:19.786797 master-0 kubenswrapper[4172]: E0220 14:45:19.786231 4172 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 14:45:19.786797 master-0 kubenswrapper[4172]: E0220 14:45:19.786320 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:45:20.786294058 +0000 UTC m=+61.341519688 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found
Feb 20 14:45:20.414150 master-0 kubenswrapper[4172]: I0220 14:45:20.414077 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" event={"ID":"9fd9f419-2cdc-4991-8fb9-87d76ac58976","Type":"ContainerStarted","Data":"469af398b29095aa460373b4a9d58261db50995525853368aaa76c2198d9753f"}
Feb 20 14:45:20.415455 master-0 kubenswrapper[4172]: I0220 14:45:20.415132 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-wtxfh" event={"ID":"014f3913-ac7e-431a-880c-91d979a5dfc7","Type":"ContainerStarted","Data":"617ccef4b48beb8ed1f21a9b1c418d8de1fbc1ee6e5e89c3998a1f0b78051407"}
Feb 20 14:45:20.795109 master-0 kubenswrapper[4172]: I0220 14:45:20.795032 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:20.795355 master-0 kubenswrapper[4172]: E0220 14:45:20.795244 4172 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 14:45:20.795414 master-0 kubenswrapper[4172]: E0220 14:45:20.795370 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:45:22.795337259 +0000 UTC m=+63.350562899 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found
Feb 20 14:45:22.810995 master-0 kubenswrapper[4172]: I0220 14:45:22.810901 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:45:22.811631 master-0 kubenswrapper[4172]: E0220 14:45:22.811110 4172 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 14:45:22.811631 master-0 kubenswrapper[4172]: E0220 14:45:22.811225 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:45:26.811195394 +0000 UTC m=+67.366421024 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found Feb 20 14:45:24.198256 master-0 kubenswrapper[4172]: I0220 14:45:24.198175 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Feb 20 14:45:24.207098 master-0 kubenswrapper[4172]: I0220 14:45:24.207057 4172 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Feb 20 14:45:24.425484 master-0 kubenswrapper[4172]: I0220 14:45:24.425413 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" event={"ID":"9fd9f419-2cdc-4991-8fb9-87d76ac58976","Type":"ContainerStarted","Data":"206ff74dbf8ac205b7526aba69f67598c7eb64c83ff678f0e12a41fa367def5c"} Feb 20 14:45:24.429447 master-0 kubenswrapper[4172]: I0220 14:45:24.429385 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-wtxfh" event={"ID":"014f3913-ac7e-431a-880c-91d979a5dfc7","Type":"ContainerStarted","Data":"d0525760cb8ba3e4a202836682905e3209d011265d322e121763f9e03af800fb"} Feb 20 14:45:24.463329 master-0 kubenswrapper[4172]: I0220 14:45:24.463121 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" podStartSLOduration=27.103903841 podStartE2EDuration="30.463068009s" podCreationTimestamp="2026-02-20 14:44:54 +0000 UTC" firstStartedPulling="2026-02-20 14:45:19.474388678 +0000 UTC m=+60.029614308" lastFinishedPulling="2026-02-20 14:45:22.833552836 +0000 UTC m=+63.388778476" observedRunningTime="2026-02-20 14:45:24.446162537 +0000 UTC m=+65.001388147" 
watchObservedRunningTime="2026-02-20 14:45:24.463068009 +0000 UTC m=+65.018293619" Feb 20 14:45:24.463722 master-0 kubenswrapper[4172]: I0220 14:45:24.463527 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="assisted-installer/assisted-installer-controller-wtxfh" podStartSLOduration=303.716059299 podStartE2EDuration="5m8.463520993s" podCreationTimestamp="2026-02-20 14:40:16 +0000 UTC" firstStartedPulling="2026-02-20 14:45:19.44737604 +0000 UTC m=+60.002601650" lastFinishedPulling="2026-02-20 14:45:24.194837734 +0000 UTC m=+64.750063344" observedRunningTime="2026-02-20 14:45:24.463018738 +0000 UTC m=+65.018244348" watchObservedRunningTime="2026-02-20 14:45:24.463520993 +0000 UTC m=+65.018746603" Feb 20 14:45:24.731584 master-0 kubenswrapper[4172]: I0220 14:45:24.731349 4172 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 20 14:45:25.433157 master-0 kubenswrapper[4172]: I0220 14:45:25.433051 4172 generic.go:334] "Generic (PLEG): container finished" podID="014f3913-ac7e-431a-880c-91d979a5dfc7" containerID="d0525760cb8ba3e4a202836682905e3209d011265d322e121763f9e03af800fb" exitCode=0 Feb 20 14:45:25.434973 master-0 kubenswrapper[4172]: I0220 14:45:25.433186 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-wtxfh" event={"ID":"014f3913-ac7e-431a-880c-91d979a5dfc7","Type":"ContainerDied","Data":"d0525760cb8ba3e4a202836682905e3209d011265d322e121763f9e03af800fb"} Feb 20 14:45:26.459098 master-0 kubenswrapper[4172]: I0220 14:45:26.459014 4172 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-wtxfh" Feb 20 14:45:26.539180 master-0 kubenswrapper[4172]: I0220 14:45:26.539133 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-sno-bootstrap-files\") pod \"014f3913-ac7e-431a-880c-91d979a5dfc7\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " Feb 20 14:45:26.539511 master-0 kubenswrapper[4172]: I0220 14:45:26.539228 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "014f3913-ac7e-431a-880c-91d979a5dfc7" (UID: "014f3913-ac7e-431a-880c-91d979a5dfc7"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:45:26.539605 master-0 kubenswrapper[4172]: I0220 14:45:26.539452 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-ca-bundle\") pod \"014f3913-ac7e-431a-880c-91d979a5dfc7\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " Feb 20 14:45:26.539708 master-0 kubenswrapper[4172]: I0220 14:45:26.539632 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-resolv-conf\") pod \"014f3913-ac7e-431a-880c-91d979a5dfc7\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " Feb 20 14:45:26.539708 master-0 kubenswrapper[4172]: I0220 14:45:26.539689 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8b94\" (UniqueName: \"kubernetes.io/projected/014f3913-ac7e-431a-880c-91d979a5dfc7-kube-api-access-w8b94\") pod \"014f3913-ac7e-431a-880c-91d979a5dfc7\" 
(UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " Feb 20 14:45:26.539980 master-0 kubenswrapper[4172]: I0220 14:45:26.539731 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-var-run-resolv-conf\") pod \"014f3913-ac7e-431a-880c-91d979a5dfc7\" (UID: \"014f3913-ac7e-431a-880c-91d979a5dfc7\") " Feb 20 14:45:26.539980 master-0 kubenswrapper[4172]: I0220 14:45:26.539739 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "014f3913-ac7e-431a-880c-91d979a5dfc7" (UID: "014f3913-ac7e-431a-880c-91d979a5dfc7"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:45:26.539980 master-0 kubenswrapper[4172]: I0220 14:45:26.539849 4172 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Feb 20 14:45:26.539980 master-0 kubenswrapper[4172]: I0220 14:45:26.539883 4172 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 20 14:45:26.539980 master-0 kubenswrapper[4172]: I0220 14:45:26.539834 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "014f3913-ac7e-431a-880c-91d979a5dfc7" (UID: "014f3913-ac7e-431a-880c-91d979a5dfc7"). InnerVolumeSpecName "host-var-run-resolv-conf". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:45:26.540324 master-0 kubenswrapper[4172]: I0220 14:45:26.540049 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "014f3913-ac7e-431a-880c-91d979a5dfc7" (UID: "014f3913-ac7e-431a-880c-91d979a5dfc7"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:45:26.544969 master-0 kubenswrapper[4172]: I0220 14:45:26.544867 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/014f3913-ac7e-431a-880c-91d979a5dfc7-kube-api-access-w8b94" (OuterVolumeSpecName: "kube-api-access-w8b94") pod "014f3913-ac7e-431a-880c-91d979a5dfc7" (UID: "014f3913-ac7e-431a-880c-91d979a5dfc7"). InnerVolumeSpecName "kube-api-access-w8b94". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:45:26.632178 master-0 kubenswrapper[4172]: I0220 14:45:26.632082 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-s9znb"] Feb 20 14:45:26.632486 master-0 kubenswrapper[4172]: E0220 14:45:26.632201 4172 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014f3913-ac7e-431a-880c-91d979a5dfc7" containerName="assisted-installer-controller" Feb 20 14:45:26.632486 master-0 kubenswrapper[4172]: I0220 14:45:26.632223 4172 state_mem.go:107] "Deleted CPUSet assignment" podUID="014f3913-ac7e-431a-880c-91d979a5dfc7" containerName="assisted-installer-controller" Feb 20 14:45:26.632486 master-0 kubenswrapper[4172]: I0220 14:45:26.632295 4172 memory_manager.go:354] "RemoveStaleState removing state" podUID="014f3913-ac7e-431a-880c-91d979a5dfc7" containerName="assisted-installer-controller" Feb 20 14:45:26.632801 master-0 kubenswrapper[4172]: I0220 14:45:26.632681 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-s9znb" Feb 20 14:45:26.641098 master-0 kubenswrapper[4172]: I0220 14:45:26.641027 4172 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 20 14:45:26.641098 master-0 kubenswrapper[4172]: I0220 14:45:26.641082 4172 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/014f3913-ac7e-431a-880c-91d979a5dfc7-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 14:45:26.641380 master-0 kubenswrapper[4172]: I0220 14:45:26.641110 4172 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8b94\" (UniqueName: \"kubernetes.io/projected/014f3913-ac7e-431a-880c-91d979a5dfc7-kube-api-access-w8b94\") on node \"master-0\" DevicePath \"\"" Feb 20 14:45:26.741513 master-0 kubenswrapper[4172]: I0220 14:45:26.741380 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mmd\" (UniqueName: \"kubernetes.io/projected/cbc6343c-22ec-4cf8-904f-6a93cd251993-kube-api-access-w8mmd\") pod \"mtu-prober-s9znb\" (UID: \"cbc6343c-22ec-4cf8-904f-6a93cd251993\") " pod="openshift-network-operator/mtu-prober-s9znb" Feb 20 14:45:26.842059 master-0 kubenswrapper[4172]: I0220 14:45:26.841962 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:45:26.842059 master-0 kubenswrapper[4172]: I0220 14:45:26.842043 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w8mmd\" (UniqueName: \"kubernetes.io/projected/cbc6343c-22ec-4cf8-904f-6a93cd251993-kube-api-access-w8mmd\") pod \"mtu-prober-s9znb\" (UID: \"cbc6343c-22ec-4cf8-904f-6a93cd251993\") " pod="openshift-network-operator/mtu-prober-s9znb" Feb 20 14:45:26.842362 master-0 kubenswrapper[4172]: E0220 14:45:26.842284 4172 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 14:45:26.842458 master-0 kubenswrapper[4172]: E0220 14:45:26.842426 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:45:34.84238832 +0000 UTC m=+75.397613960 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found Feb 20 14:45:26.870438 master-0 kubenswrapper[4172]: I0220 14:45:26.870361 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mmd\" (UniqueName: \"kubernetes.io/projected/cbc6343c-22ec-4cf8-904f-6a93cd251993-kube-api-access-w8mmd\") pod \"mtu-prober-s9znb\" (UID: \"cbc6343c-22ec-4cf8-904f-6a93cd251993\") " pod="openshift-network-operator/mtu-prober-s9znb" Feb 20 14:45:26.953685 master-0 kubenswrapper[4172]: I0220 14:45:26.953552 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-s9znb" Feb 20 14:45:26.968460 master-0 kubenswrapper[4172]: W0220 14:45:26.968367 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbc6343c_22ec_4cf8_904f_6a93cd251993.slice/crio-f15a77dc38bd5f18a50d69cdc8939c3167e1a8020322420263668615a817067f WatchSource:0}: Error finding container f15a77dc38bd5f18a50d69cdc8939c3167e1a8020322420263668615a817067f: Status 404 returned error can't find the container with id f15a77dc38bd5f18a50d69cdc8939c3167e1a8020322420263668615a817067f Feb 20 14:45:27.439755 master-0 kubenswrapper[4172]: I0220 14:45:27.439334 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-wtxfh" event={"ID":"014f3913-ac7e-431a-880c-91d979a5dfc7","Type":"ContainerDied","Data":"617ccef4b48beb8ed1f21a9b1c418d8de1fbc1ee6e5e89c3998a1f0b78051407"} Feb 20 14:45:27.439755 master-0 kubenswrapper[4172]: I0220 14:45:27.439722 4172 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="617ccef4b48beb8ed1f21a9b1c418d8de1fbc1ee6e5e89c3998a1f0b78051407" Feb 20 14:45:27.439755 master-0 kubenswrapper[4172]: I0220 14:45:27.439388 4172 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-wtxfh" Feb 20 14:45:27.441100 master-0 kubenswrapper[4172]: I0220 14:45:27.441053 4172 generic.go:334] "Generic (PLEG): container finished" podID="cbc6343c-22ec-4cf8-904f-6a93cd251993" containerID="7d3284edf21995a27c89886a69cea12c9862e571d30d2baf2f5e1bce4a1984d8" exitCode=0 Feb 20 14:45:27.441199 master-0 kubenswrapper[4172]: I0220 14:45:27.441107 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-s9znb" event={"ID":"cbc6343c-22ec-4cf8-904f-6a93cd251993","Type":"ContainerDied","Data":"7d3284edf21995a27c89886a69cea12c9862e571d30d2baf2f5e1bce4a1984d8"} Feb 20 14:45:27.441269 master-0 kubenswrapper[4172]: I0220 14:45:27.441190 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-s9znb" event={"ID":"cbc6343c-22ec-4cf8-904f-6a93cd251993","Type":"ContainerStarted","Data":"f15a77dc38bd5f18a50d69cdc8939c3167e1a8020322420263668615a817067f"} Feb 20 14:45:28.470431 master-0 kubenswrapper[4172]: I0220 14:45:28.470351 4172 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-s9znb" Feb 20 14:45:28.555986 master-0 kubenswrapper[4172]: I0220 14:45:28.555893 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8mmd\" (UniqueName: \"kubernetes.io/projected/cbc6343c-22ec-4cf8-904f-6a93cd251993-kube-api-access-w8mmd\") pod \"cbc6343c-22ec-4cf8-904f-6a93cd251993\" (UID: \"cbc6343c-22ec-4cf8-904f-6a93cd251993\") " Feb 20 14:45:28.560627 master-0 kubenswrapper[4172]: I0220 14:45:28.560541 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbc6343c-22ec-4cf8-904f-6a93cd251993-kube-api-access-w8mmd" (OuterVolumeSpecName: "kube-api-access-w8mmd") pod "cbc6343c-22ec-4cf8-904f-6a93cd251993" (UID: "cbc6343c-22ec-4cf8-904f-6a93cd251993"). 
InnerVolumeSpecName "kube-api-access-w8mmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:45:28.656465 master-0 kubenswrapper[4172]: I0220 14:45:28.656384 4172 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8mmd\" (UniqueName: \"kubernetes.io/projected/cbc6343c-22ec-4cf8-904f-6a93cd251993-kube-api-access-w8mmd\") on node \"master-0\" DevicePath \"\"" Feb 20 14:45:29.448395 master-0 kubenswrapper[4172]: I0220 14:45:29.448292 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-s9znb" event={"ID":"cbc6343c-22ec-4cf8-904f-6a93cd251993","Type":"ContainerDied","Data":"f15a77dc38bd5f18a50d69cdc8939c3167e1a8020322420263668615a817067f"} Feb 20 14:45:29.448395 master-0 kubenswrapper[4172]: I0220 14:45:29.448360 4172 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f15a77dc38bd5f18a50d69cdc8939c3167e1a8020322420263668615a817067f" Feb 20 14:45:29.448395 master-0 kubenswrapper[4172]: I0220 14:45:29.448398 4172 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-s9znb" Feb 20 14:45:31.648037 master-0 kubenswrapper[4172]: I0220 14:45:31.647979 4172 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-s9znb"] Feb 20 14:45:31.654037 master-0 kubenswrapper[4172]: I0220 14:45:31.653982 4172 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-s9znb"] Feb 20 14:45:32.213147 master-0 kubenswrapper[4172]: I0220 14:45:32.213069 4172 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbc6343c-22ec-4cf8-904f-6a93cd251993" path="/var/lib/kubelet/pods/cbc6343c-22ec-4cf8-904f-6a93cd251993/volumes" Feb 20 14:45:34.902874 master-0 kubenswrapper[4172]: I0220 14:45:34.902791 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:45:34.903737 master-0 kubenswrapper[4172]: E0220 14:45:34.903016 4172 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 14:45:34.903737 master-0 kubenswrapper[4172]: E0220 14:45:34.903117 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:45:50.903085476 +0000 UTC m=+91.458311116 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found Feb 20 14:45:36.520863 master-0 kubenswrapper[4172]: I0220 14:45:36.517374 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-m6hpf"] Feb 20 14:45:36.520863 master-0 kubenswrapper[4172]: E0220 14:45:36.517466 4172 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc6343c-22ec-4cf8-904f-6a93cd251993" containerName="prober" Feb 20 14:45:36.520863 master-0 kubenswrapper[4172]: I0220 14:45:36.517479 4172 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc6343c-22ec-4cf8-904f-6a93cd251993" containerName="prober" Feb 20 14:45:36.520863 master-0 kubenswrapper[4172]: I0220 14:45:36.517505 4172 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbc6343c-22ec-4cf8-904f-6a93cd251993" containerName="prober" Feb 20 14:45:36.520863 master-0 kubenswrapper[4172]: I0220 14:45:36.517681 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.520863 master-0 kubenswrapper[4172]: I0220 14:45:36.520157 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 20 14:45:36.520863 master-0 kubenswrapper[4172]: I0220 14:45:36.521264 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 20 14:45:36.520863 master-0 kubenswrapper[4172]: I0220 14:45:36.521394 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 20 14:45:36.524549 master-0 kubenswrapper[4172]: I0220 14:45:36.524140 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 20 14:45:36.615754 master-0 kubenswrapper[4172]: I0220 14:45:36.615661 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-system-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.615754 master-0 kubenswrapper[4172]: I0220 14:45:36.615734 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-cni-binary-copy\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.616137 master-0 kubenswrapper[4172]: I0220 14:45:36.615773 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-cnibin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 
14:45:36.616137 master-0 kubenswrapper[4172]: I0220 14:45:36.615953 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.616137 master-0 kubenswrapper[4172]: I0220 14:45:36.616080 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-bin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.616319 master-0 kubenswrapper[4172]: I0220 14:45:36.616174 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-multus\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.616319 master-0 kubenswrapper[4172]: I0220 14:45:36.616228 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkq7j\" (UniqueName: \"kubernetes.io/projected/32a79fe0-e619-4a66-8617-e8111bdc7e96-kube-api-access-jkq7j\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.616319 master-0 kubenswrapper[4172]: I0220 14:45:36.616286 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-hostroot\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" 
Feb 20 14:45:36.616485 master-0 kubenswrapper[4172]: I0220 14:45:36.616342 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-os-release\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.616485 master-0 kubenswrapper[4172]: I0220 14:45:36.616387 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-etc-kubernetes\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.616485 master-0 kubenswrapper[4172]: I0220 14:45:36.616421 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-k8s-cni-cncf-io\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.616485 master-0 kubenswrapper[4172]: I0220 14:45:36.616454 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-kubelet\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.616485 master-0 kubenswrapper[4172]: I0220 14:45:36.616485 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-conf-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.616765 master-0 kubenswrapper[4172]: I0220 14:45:36.616518 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-daemon-config\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.616765 master-0 kubenswrapper[4172]: I0220 14:45:36.616571 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-multus-certs\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.616765 master-0 kubenswrapper[4172]: I0220 14:45:36.616659 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-socket-dir-parent\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.616765 master-0 kubenswrapper[4172]: I0220 14:45:36.616715 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-netns\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.717423 master-0 kubenswrapper[4172]: I0220 14:45:36.717281 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-cnibin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.717630 master-0 kubenswrapper[4172]: I0220 14:45:36.717468 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-cnibin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.717630 master-0 kubenswrapper[4172]: I0220 14:45:36.717581 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkq7j\" (UniqueName: \"kubernetes.io/projected/32a79fe0-e619-4a66-8617-e8111bdc7e96-kube-api-access-jkq7j\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.717771 master-0 kubenswrapper[4172]: I0220 14:45:36.717703 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.717836 master-0 kubenswrapper[4172]: I0220 14:45:36.717769 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-bin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.717836 master-0 kubenswrapper[4172]: I0220 14:45:36.717802 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-multus\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.717836 master-0 kubenswrapper[4172]: I0220 14:45:36.717810 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.718058 master-0 kubenswrapper[4172]: I0220 14:45:36.717838 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-hostroot\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.718058 master-0 kubenswrapper[4172]: I0220 14:45:36.717876 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-bin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.718058 master-0 kubenswrapper[4172]: I0220 14:45:36.717880 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-os-release\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.718058 master-0 kubenswrapper[4172]: I0220 14:45:36.717985 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-etc-kubernetes\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.718284 master-0 kubenswrapper[4172]: I0220 14:45:36.718035 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-multus\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.718284 master-0 kubenswrapper[4172]: I0220 14:45:36.718078 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-hostroot\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.718284 master-0 kubenswrapper[4172]: I0220 14:45:36.718127 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-multus-certs\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.718453 master-0 kubenswrapper[4172]: I0220 14:45:36.718292 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-etc-kubernetes\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.718453 master-0 kubenswrapper[4172]: I0220 14:45:36.718429 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-os-release\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:45:36.718569 master-0 kubenswrapper[4172]: I0220 14:45:36.718464 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName:
\"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.718569 master-0 kubenswrapper[4172]: I0220 14:45:36.718511 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-k8s-cni-cncf-io\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.718712 master-0 kubenswrapper[4172]: I0220 14:45:36.718613 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-k8s-cni-cncf-io\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.718712 master-0 kubenswrapper[4172]: I0220 14:45:36.718642 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-kubelet\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.718712 master-0 kubenswrapper[4172]: I0220 14:45:36.718686 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-conf-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.718886 master-0 kubenswrapper[4172]: I0220 14:45:36.718759 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-kubelet\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " 
pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.718886 master-0 kubenswrapper[4172]: I0220 14:45:36.718824 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-daemon-config\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.719058 master-0 kubenswrapper[4172]: I0220 14:45:36.718909 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-conf-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.719124 master-0 kubenswrapper[4172]: I0220 14:45:36.719065 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-socket-dir-parent\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.719181 master-0 kubenswrapper[4172]: I0220 14:45:36.719130 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-netns\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.719249 master-0 kubenswrapper[4172]: I0220 14:45:36.719187 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-system-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.719249 master-0 kubenswrapper[4172]: 
I0220 14:45:36.719236 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-cni-binary-copy\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.719359 master-0 kubenswrapper[4172]: I0220 14:45:36.719269 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-socket-dir-parent\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.719359 master-0 kubenswrapper[4172]: I0220 14:45:36.719274 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-netns\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.719611 master-0 kubenswrapper[4172]: I0220 14:45:36.719555 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-system-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.720209 master-0 kubenswrapper[4172]: I0220 14:45:36.720166 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-daemon-config\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.720765 master-0 kubenswrapper[4172]: I0220 14:45:36.720705 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-cni-binary-copy\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.721110 master-0 kubenswrapper[4172]: I0220 14:45:36.721051 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-6ts4p"] Feb 20 14:45:36.722072 master-0 kubenswrapper[4172]: I0220 14:45:36.722021 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.724429 master-0 kubenswrapper[4172]: I0220 14:45:36.724376 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 20 14:45:36.724726 master-0 kubenswrapper[4172]: I0220 14:45:36.724675 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 20 14:45:36.747813 master-0 kubenswrapper[4172]: I0220 14:45:36.747750 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkq7j\" (UniqueName: \"kubernetes.io/projected/32a79fe0-e619-4a66-8617-e8111bdc7e96-kube-api-access-jkq7j\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.819797 master-0 kubenswrapper[4172]: I0220 14:45:36.819702 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-whereabouts-configmap\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.820132 master-0 kubenswrapper[4172]: I0220 14:45:36.819800 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-cnibin\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.820132 master-0 kubenswrapper[4172]: I0220 14:45:36.819884 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.820132 master-0 kubenswrapper[4172]: I0220 14:45:36.820020 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-os-release\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.820132 master-0 kubenswrapper[4172]: I0220 14:45:36.820056 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.820377 master-0 kubenswrapper[4172]: I0220 14:45:36.820194 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-system-cni-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " 
pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.820377 master-0 kubenswrapper[4172]: I0220 14:45:36.820257 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-binary-copy\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.820377 master-0 kubenswrapper[4172]: I0220 14:45:36.820290 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psd59\" (UniqueName: \"kubernetes.io/projected/b6e6d218-d969-40b5-a32b-9b2093089dbf-kube-api-access-psd59\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.844549 master-0 kubenswrapper[4172]: I0220 14:45:36.844472 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-m6hpf" Feb 20 14:45:36.861226 master-0 kubenswrapper[4172]: W0220 14:45:36.861140 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32a79fe0_e619_4a66_8617_e8111bdc7e96.slice/crio-1489b48b9281848030ac8650ba6a4f51919e00d3276dcba9cb79f43f94b0f041 WatchSource:0}: Error finding container 1489b48b9281848030ac8650ba6a4f51919e00d3276dcba9cb79f43f94b0f041: Status 404 returned error can't find the container with id 1489b48b9281848030ac8650ba6a4f51919e00d3276dcba9cb79f43f94b0f041 Feb 20 14:45:36.921502 master-0 kubenswrapper[4172]: I0220 14:45:36.921391 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-system-cni-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.921502 master-0 kubenswrapper[4172]: I0220 14:45:36.921489 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-binary-copy\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.922115 master-0 kubenswrapper[4172]: I0220 14:45:36.921591 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-system-cni-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.922115 master-0 kubenswrapper[4172]: I0220 14:45:36.921668 4172 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-psd59\" (UniqueName: \"kubernetes.io/projected/b6e6d218-d969-40b5-a32b-9b2093089dbf-kube-api-access-psd59\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.922115 master-0 kubenswrapper[4172]: I0220 14:45:36.921723 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-whereabouts-configmap\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.922115 master-0 kubenswrapper[4172]: I0220 14:45:36.921913 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-cnibin\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.922115 master-0 kubenswrapper[4172]: I0220 14:45:36.922033 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-cnibin\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.922398 master-0 kubenswrapper[4172]: I0220 14:45:36.922313 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 
20 14:45:36.922463 master-0 kubenswrapper[4172]: I0220 14:45:36.922397 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-os-release\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.922532 master-0 kubenswrapper[4172]: I0220 14:45:36.922455 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.922623 master-0 kubenswrapper[4172]: I0220 14:45:36.922581 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-os-release\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.922806 master-0 kubenswrapper[4172]: I0220 14:45:36.922731 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.923137 master-0 kubenswrapper[4172]: I0220 14:45:36.923081 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-binary-copy\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: 
\"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.923222 master-0 kubenswrapper[4172]: I0220 14:45:36.923148 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-whereabouts-configmap\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.923664 master-0 kubenswrapper[4172]: I0220 14:45:36.923613 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:36.951192 master-0 kubenswrapper[4172]: I0220 14:45:36.951121 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psd59\" (UniqueName: \"kubernetes.io/projected/b6e6d218-d969-40b5-a32b-9b2093089dbf-kube-api-access-psd59\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:37.041194 master-0 kubenswrapper[4172]: I0220 14:45:37.041100 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 14:45:37.058008 master-0 kubenswrapper[4172]: W0220 14:45:37.057899 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6e6d218_d969_40b5_a32b_9b2093089dbf.slice/crio-95115710de33578fe832a95630e8d98eba6ecc806a442bdc7740ad889ac1e80b WatchSource:0}: Error finding container 95115710de33578fe832a95630e8d98eba6ecc806a442bdc7740ad889ac1e80b: Status 404 returned error can't find the container with id 95115710de33578fe832a95630e8d98eba6ecc806a442bdc7740ad889ac1e80b Feb 20 14:45:37.473167 master-0 kubenswrapper[4172]: I0220 14:45:37.472971 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerStarted","Data":"95115710de33578fe832a95630e8d98eba6ecc806a442bdc7740ad889ac1e80b"} Feb 20 14:45:37.474677 master-0 kubenswrapper[4172]: I0220 14:45:37.474614 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m6hpf" event={"ID":"32a79fe0-e619-4a66-8617-e8111bdc7e96","Type":"ContainerStarted","Data":"1489b48b9281848030ac8650ba6a4f51919e00d3276dcba9cb79f43f94b0f041"} Feb 20 14:45:37.506329 master-0 kubenswrapper[4172]: I0220 14:45:37.505587 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-99lkv"] Feb 20 14:45:37.506329 master-0 kubenswrapper[4172]: I0220 14:45:37.506091 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:45:37.506329 master-0 kubenswrapper[4172]: E0220 14:45:37.506178 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418" Feb 20 14:45:37.627971 master-0 kubenswrapper[4172]: I0220 14:45:37.627785 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:45:37.627971 master-0 kubenswrapper[4172]: I0220 14:45:37.627871 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nlf9\" (UniqueName: \"kubernetes.io/projected/5ea4c132-b6d0-4dc9-942d-48e359eed418-kube-api-access-7nlf9\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:45:37.728694 master-0 kubenswrapper[4172]: I0220 14:45:37.728573 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:45:37.728694 master-0 kubenswrapper[4172]: I0220 14:45:37.728653 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nlf9\" 
(UniqueName: \"kubernetes.io/projected/5ea4c132-b6d0-4dc9-942d-48e359eed418-kube-api-access-7nlf9\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:45:37.728978 master-0 kubenswrapper[4172]: E0220 14:45:37.728892 4172 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 14:45:37.729111 master-0 kubenswrapper[4172]: E0220 14:45:37.729074 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:45:38.229041106 +0000 UTC m=+78.784266716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 14:45:37.746347 master-0 kubenswrapper[4172]: I0220 14:45:37.746208 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nlf9\" (UniqueName: \"kubernetes.io/projected/5ea4c132-b6d0-4dc9-942d-48e359eed418-kube-api-access-7nlf9\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:45:38.224799 master-0 kubenswrapper[4172]: W0220 14:45:38.224701 4172 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set 
securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 20 14:45:38.225647 master-0 kubenswrapper[4172]: I0220 14:45:38.225573 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 20 14:45:38.232970 master-0 kubenswrapper[4172]: I0220 14:45:38.232905 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:45:38.233137 master-0 kubenswrapper[4172]: E0220 14:45:38.233069 4172 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 14:45:38.233137 master-0 kubenswrapper[4172]: E0220 14:45:38.233130 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:45:39.233115322 +0000 UTC m=+79.788340922 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 14:45:39.207122 master-0 kubenswrapper[4172]: I0220 14:45:39.207050 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:45:39.207664 master-0 kubenswrapper[4172]: E0220 14:45:39.207198 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:45:39.241340 master-0 kubenswrapper[4172]: I0220 14:45:39.241293 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:45:39.241553 master-0 kubenswrapper[4172]: E0220 14:45:39.241486 4172 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 14:45:39.241649 master-0 kubenswrapper[4172]: E0220 14:45:39.241626 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:45:41.241594411 +0000 UTC m=+81.796820051 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 14:45:40.224282 master-0 kubenswrapper[4172]: I0220 14:45:40.224019 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=2.223996434 podStartE2EDuration="2.223996434s" podCreationTimestamp="2026-02-20 14:45:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:45:40.223452552 +0000 UTC m=+80.778678162" watchObservedRunningTime="2026-02-20 14:45:40.223996434 +0000 UTC m=+80.779222074"
Feb 20 14:45:40.485705 master-0 kubenswrapper[4172]: I0220 14:45:40.485588 4172 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="c092110b72556c746170c7d0567154da90861fa9515b4bc320e9e6d1cc856cd6" exitCode=0
Feb 20 14:45:40.485705 master-0 kubenswrapper[4172]: I0220 14:45:40.485664 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerDied","Data":"c092110b72556c746170c7d0567154da90861fa9515b4bc320e9e6d1cc856cd6"}
Feb 20 14:45:41.207085 master-0 kubenswrapper[4172]: I0220 14:45:41.207033 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:45:41.207308 master-0 kubenswrapper[4172]: E0220 14:45:41.207208 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:45:41.257692 master-0 kubenswrapper[4172]: I0220 14:45:41.257634 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:45:41.258251 master-0 kubenswrapper[4172]: E0220 14:45:41.257856 4172 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 14:45:41.258251 master-0 kubenswrapper[4172]: E0220 14:45:41.257951 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:45:45.257906336 +0000 UTC m=+85.813131956 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 14:45:43.216178 master-0 kubenswrapper[4172]: I0220 14:45:43.206991 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:45:43.216178 master-0 kubenswrapper[4172]: E0220 14:45:43.207196 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:45:43.567432 master-0 kubenswrapper[4172]: I0220 14:45:43.567291 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 20 14:45:45.207851 master-0 kubenswrapper[4172]: I0220 14:45:45.207796 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:45:45.208512 master-0 kubenswrapper[4172]: E0220 14:45:45.208004 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:45:45.292177 master-0 kubenswrapper[4172]: I0220 14:45:45.292045 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:45:45.292414 master-0 kubenswrapper[4172]: E0220 14:45:45.292230 4172 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 14:45:45.292414 master-0 kubenswrapper[4172]: E0220 14:45:45.292310 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:45:53.292292326 +0000 UTC m=+93.847517926 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 14:45:47.207373 master-0 kubenswrapper[4172]: I0220 14:45:47.207210 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:45:47.207792 master-0 kubenswrapper[4172]: E0220 14:45:47.207458 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:45:48.910599 master-0 kubenswrapper[4172]: I0220 14:45:48.910529 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"]
Feb 20 14:45:48.911502 master-0 kubenswrapper[4172]: I0220 14:45:48.910852 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:48.912888 master-0 kubenswrapper[4172]: I0220 14:45:48.912839 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 20 14:45:48.913680 master-0 kubenswrapper[4172]: I0220 14:45:48.913660 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 20 14:45:48.913793 master-0 kubenswrapper[4172]: I0220 14:45:48.913753 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 20 14:45:48.913839 master-0 kubenswrapper[4172]: I0220 14:45:48.913800 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 20 14:45:48.914061 master-0 kubenswrapper[4172]: I0220 14:45:48.914036 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 20 14:45:48.925113 master-0 kubenswrapper[4172]: I0220 14:45:48.925024 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=5.924995673 podStartE2EDuration="5.924995673s" podCreationTimestamp="2026-02-20 14:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:45:48.924680046 +0000 UTC m=+89.479905646" watchObservedRunningTime="2026-02-20 14:45:48.924995673 +0000 UTC m=+89.480221303"
Feb 20 14:45:49.020945 master-0 kubenswrapper[4172]: I0220 14:45:49.020859 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.021163 master-0 kubenswrapper[4172]: I0220 14:45:49.021022 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mclrj\" (UniqueName: \"kubernetes.io/projected/5d2b154b-de63-4c9b-99d8-487fb3035fb9-kube-api-access-mclrj\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.021163 master-0 kubenswrapper[4172]: I0220 14:45:49.021092 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.021330 master-0 kubenswrapper[4172]: I0220 14:45:49.021281 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.107305 master-0 kubenswrapper[4172]: I0220 14:45:49.107229 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5mwbb"]
Feb 20 14:45:49.108563 master-0 kubenswrapper[4172]: I0220 14:45:49.108459 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.110529 master-0 kubenswrapper[4172]: I0220 14:45:49.110336 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 20 14:45:49.110969 master-0 kubenswrapper[4172]: I0220 14:45:49.110534 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 20 14:45:49.122474 master-0 kubenswrapper[4172]: I0220 14:45:49.122427 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.122611 master-0 kubenswrapper[4172]: I0220 14:45:49.122579 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mclrj\" (UniqueName: \"kubernetes.io/projected/5d2b154b-de63-4c9b-99d8-487fb3035fb9-kube-api-access-mclrj\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.122611 master-0 kubenswrapper[4172]: I0220 14:45:49.122601 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.123579 master-0 kubenswrapper[4172]: I0220 14:45:49.122868 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.123579 master-0 kubenswrapper[4172]: I0220 14:45:49.123536 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.123697 master-0 kubenswrapper[4172]: I0220 14:45:49.123646 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.129222 master-0 kubenswrapper[4172]: I0220 14:45:49.129178 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.144875 master-0 kubenswrapper[4172]: I0220 14:45:49.144827 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mclrj\" (UniqueName: \"kubernetes.io/projected/5d2b154b-de63-4c9b-99d8-487fb3035fb9-kube-api-access-mclrj\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.207506 master-0 kubenswrapper[4172]: I0220 14:45:49.207316 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:45:49.207768 master-0 kubenswrapper[4172]: E0220 14:45:49.207501 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:45:49.219917 master-0 kubenswrapper[4172]: I0220 14:45:49.219795 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 20 14:45:49.223244 master-0 kubenswrapper[4172]: I0220 14:45:49.223205 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-run-ovn-kubernetes\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.223244 master-0 kubenswrapper[4172]: I0220 14:45:49.223244 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-slash\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.223374 master-0 kubenswrapper[4172]: I0220 14:45:49.223265 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-openvswitch\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.223374 master-0 kubenswrapper[4172]: I0220 14:45:49.223282 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-log-socket\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.223458 master-0 kubenswrapper[4172]: I0220 14:45:49.223418 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-env-overrides\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.223506 master-0 kubenswrapper[4172]: I0220 14:45:49.223465 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnrxr\" (UniqueName: \"kubernetes.io/projected/2eebd463-cbbd-4546-8a7c-4621c45c87a0-kube-api-access-rnrxr\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.223547 master-0 kubenswrapper[4172]: I0220 14:45:49.223521 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-cni-bin\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.223588 master-0 kubenswrapper[4172]: I0220 14:45:49.223558 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-systemd\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.223633 master-0 kubenswrapper[4172]: I0220 14:45:49.223602 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovn-node-metrics-cert\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.223674 master-0 kubenswrapper[4172]: I0220 14:45:49.223630 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovnkube-script-lib\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.223716 master-0 kubenswrapper[4172]: I0220 14:45:49.223672 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-ovn\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.223756 master-0 kubenswrapper[4172]: I0220 14:45:49.223712 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-etc-openvswitch\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.223800 master-0 kubenswrapper[4172]: I0220 14:45:49.223772 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-node-log\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.224410 master-0 kubenswrapper[4172]: I0220 14:45:49.223833 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-kubelet\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.224410 master-0 kubenswrapper[4172]: I0220 14:45:49.223883 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.224410 master-0 kubenswrapper[4172]: I0220 14:45:49.223942 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovnkube-config\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.224410 master-0 kubenswrapper[4172]: I0220 14:45:49.223980 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-run-netns\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.224410 master-0 kubenswrapper[4172]: I0220 14:45:49.224009 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-cni-netd\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.224410 master-0 kubenswrapper[4172]: I0220 14:45:49.224046 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-var-lib-openvswitch\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.224410 master-0 kubenswrapper[4172]: I0220 14:45:49.224080 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-systemd-units\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.244321 master-0 kubenswrapper[4172]: I0220 14:45:49.244204 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:45:49.325006 master-0 kubenswrapper[4172]: I0220 14:45:49.324965 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325243 master-0 kubenswrapper[4172]: I0220 14:45:49.325040 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325243 master-0 kubenswrapper[4172]: I0220 14:45:49.325121 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovnkube-config\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325367 master-0 kubenswrapper[4172]: I0220 14:45:49.325246 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-run-netns\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325367 master-0 kubenswrapper[4172]: I0220 14:45:49.325264 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-cni-netd\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325490 master-0 kubenswrapper[4172]: I0220 14:45:49.325344 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-run-netns\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325490 master-0 kubenswrapper[4172]: I0220 14:45:49.325405 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-var-lib-openvswitch\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325490 master-0 kubenswrapper[4172]: I0220 14:45:49.325430 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-systemd-units\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325490 master-0 kubenswrapper[4172]: I0220 14:45:49.325441 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-cni-netd\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325490 master-0 kubenswrapper[4172]: I0220 14:45:49.325449 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-slash\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325803 master-0 kubenswrapper[4172]: I0220 14:45:49.325564 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-var-lib-openvswitch\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325803 master-0 kubenswrapper[4172]: I0220 14:45:49.325625 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-systemd-units\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325803 master-0 kubenswrapper[4172]: I0220 14:45:49.325676 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-openvswitch\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325803 master-0 kubenswrapper[4172]: I0220 14:45:49.325717 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-slash\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325803 master-0 kubenswrapper[4172]: I0220 14:45:49.325746 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-openvswitch\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325803 master-0 kubenswrapper[4172]: I0220 14:45:49.325765 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-log-socket\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325803 master-0 kubenswrapper[4172]: I0220 14:45:49.325804 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-run-ovn-kubernetes\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.325803 master-0 kubenswrapper[4172]: I0220 14:45:49.325806 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovnkube-config\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.325819 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-log-socket\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.325865 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-env-overrides\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.325883 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnrxr\" (UniqueName: \"kubernetes.io/projected/2eebd463-cbbd-4546-8a7c-4621c45c87a0-kube-api-access-rnrxr\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.325896 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-run-ovn-kubernetes\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.325952 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-cni-bin\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.325981 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-systemd\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.326001 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovn-node-metrics-cert\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.326021 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovnkube-script-lib\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.326062 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-ovn\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.326086 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-kubelet\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.326102 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-etc-openvswitch\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.326062 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-cni-bin\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.326134 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-kubelet\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.326189 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-ovn\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.326187 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-node-log\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.326220 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-systemd\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.326211 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-etc-openvswitch\") pod
\"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" Feb 20 14:45:49.326314 master-0 kubenswrapper[4172]: I0220 14:45:49.326299 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-node-log\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" Feb 20 14:45:49.329114 master-0 kubenswrapper[4172]: I0220 14:45:49.326472 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-env-overrides\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" Feb 20 14:45:49.329114 master-0 kubenswrapper[4172]: I0220 14:45:49.326721 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovnkube-script-lib\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" Feb 20 14:45:49.329114 master-0 kubenswrapper[4172]: I0220 14:45:49.328738 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovn-node-metrics-cert\") pod \"ovnkube-node-5mwbb\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" Feb 20 14:45:49.344896 master-0 kubenswrapper[4172]: I0220 14:45:49.344833 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnrxr\" (UniqueName: \"kubernetes.io/projected/2eebd463-cbbd-4546-8a7c-4621c45c87a0-kube-api-access-rnrxr\") pod \"ovnkube-node-5mwbb\" (UID: 
\"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" Feb 20 14:45:49.428051 master-0 kubenswrapper[4172]: I0220 14:45:49.428014 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" Feb 20 14:45:49.931651 master-0 kubenswrapper[4172]: W0220 14:45:49.931247 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eebd463_cbbd_4546_8a7c_4621c45c87a0.slice/crio-58af561de977e92642e6fc5cdaf1d69bc8b83ef4eff8e5da7ccf5c477049d9be WatchSource:0}: Error finding container 58af561de977e92642e6fc5cdaf1d69bc8b83ef4eff8e5da7ccf5c477049d9be: Status 404 returned error can't find the container with id 58af561de977e92642e6fc5cdaf1d69bc8b83ef4eff8e5da7ccf5c477049d9be Feb 20 14:45:49.934118 master-0 kubenswrapper[4172]: W0220 14:45:49.934067 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d2b154b_de63_4c9b_99d8_487fb3035fb9.slice/crio-329b7497d730cc1438c1c88bd3563dab745cc5c71baf09835af567df43aee00e WatchSource:0}: Error finding container 329b7497d730cc1438c1c88bd3563dab745cc5c71baf09835af567df43aee00e: Status 404 returned error can't find the container with id 329b7497d730cc1438c1c88bd3563dab745cc5c71baf09835af567df43aee00e Feb 20 14:45:50.225441 master-0 kubenswrapper[4172]: I0220 14:45:50.224954 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=1.224887538 podStartE2EDuration="1.224887538s" podCreationTimestamp="2026-02-20 14:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:45:50.224442597 +0000 UTC m=+90.779668227" watchObservedRunningTime="2026-02-20 14:45:50.224887538 +0000 UTC m=+90.780113218" Feb 
20 14:45:50.517452 master-0 kubenswrapper[4172]: I0220 14:45:50.517373 4172 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="86aaca74eb46c2a67484d7ed32bbe3315e4c31acc5fa267db57dbe7175337821" exitCode=0 Feb 20 14:45:50.517667 master-0 kubenswrapper[4172]: I0220 14:45:50.517451 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerDied","Data":"86aaca74eb46c2a67484d7ed32bbe3315e4c31acc5fa267db57dbe7175337821"} Feb 20 14:45:50.519854 master-0 kubenswrapper[4172]: I0220 14:45:50.519655 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m6hpf" event={"ID":"32a79fe0-e619-4a66-8617-e8111bdc7e96","Type":"ContainerStarted","Data":"337ba8f0ea63092ae9d8ede824c31eaf5e84ca8f14eaf03b8e8583029c921325"} Feb 20 14:45:50.523464 master-0 kubenswrapper[4172]: I0220 14:45:50.521666 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" event={"ID":"5d2b154b-de63-4c9b-99d8-487fb3035fb9","Type":"ContainerStarted","Data":"708920ea2d1be46cb95e4867b2c05c1f808d669a1169cbd70df0ac5377ecd8d6"} Feb 20 14:45:50.523464 master-0 kubenswrapper[4172]: I0220 14:45:50.521762 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" event={"ID":"5d2b154b-de63-4c9b-99d8-487fb3035fb9","Type":"ContainerStarted","Data":"329b7497d730cc1438c1c88bd3563dab745cc5c71baf09835af567df43aee00e"} Feb 20 14:45:50.523464 master-0 kubenswrapper[4172]: I0220 14:45:50.522531 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerStarted","Data":"58af561de977e92642e6fc5cdaf1d69bc8b83ef4eff8e5da7ccf5c477049d9be"} Feb 20 14:45:50.554259 master-0 
kubenswrapper[4172]: I0220 14:45:50.554096 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-m6hpf" podStartSLOduration=1.369560231 podStartE2EDuration="14.554070836s" podCreationTimestamp="2026-02-20 14:45:36 +0000 UTC" firstStartedPulling="2026-02-20 14:45:36.864807469 +0000 UTC m=+77.420033099" lastFinishedPulling="2026-02-20 14:45:50.049318104 +0000 UTC m=+90.604543704" observedRunningTime="2026-02-20 14:45:50.553737578 +0000 UTC m=+91.108963188" watchObservedRunningTime="2026-02-20 14:45:50.554070836 +0000 UTC m=+91.109296466" Feb 20 14:45:50.943055 master-0 kubenswrapper[4172]: I0220 14:45:50.942870 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:45:50.943055 master-0 kubenswrapper[4172]: E0220 14:45:50.943036 4172 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 14:45:50.944433 master-0 kubenswrapper[4172]: E0220 14:45:50.943089 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:46:22.94307452 +0000 UTC m=+123.498300120 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found Feb 20 14:45:51.207508 master-0 kubenswrapper[4172]: I0220 14:45:51.207396 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:45:51.207663 master-0 kubenswrapper[4172]: E0220 14:45:51.207510 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418" Feb 20 14:45:52.088717 master-0 kubenswrapper[4172]: I0220 14:45:52.088627 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-ljvkb"] Feb 20 14:45:52.089613 master-0 kubenswrapper[4172]: I0220 14:45:52.088939 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:45:52.089613 master-0 kubenswrapper[4172]: E0220 14:45:52.089000 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811" Feb 20 14:45:52.151651 master-0 kubenswrapper[4172]: I0220 14:45:52.151600 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:45:52.252341 master-0 kubenswrapper[4172]: I0220 14:45:52.252301 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:45:52.263004 master-0 kubenswrapper[4172]: E0220 14:45:52.262976 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 14:45:52.263004 master-0 kubenswrapper[4172]: E0220 14:45:52.262999 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 14:45:52.263130 master-0 kubenswrapper[4172]: E0220 14:45:52.263010 4172 projected.go:194] Error preparing data for projected volume kube-api-access-9mpr8 for pod openshift-network-diagnostics/network-check-target-ljvkb: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 14:45:52.263185 master-0 kubenswrapper[4172]: E0220 14:45:52.263146 4172 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8 podName:929dffba-46da-4d81-a437-bc6a9fe79811 nodeName:}" failed. No retries permitted until 2026-02-20 14:45:52.763131542 +0000 UTC m=+93.318357142 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9mpr8" (UniqueName: "kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8") pod "network-check-target-ljvkb" (UID: "929dffba-46da-4d81-a437-bc6a9fe79811") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 14:45:52.857671 master-0 kubenswrapper[4172]: I0220 14:45:52.857624 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:45:52.857979 master-0 kubenswrapper[4172]: E0220 14:45:52.857764 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 14:45:52.857979 master-0 kubenswrapper[4172]: E0220 14:45:52.857780 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 14:45:52.857979 master-0 kubenswrapper[4172]: E0220 14:45:52.857790 4172 projected.go:194] Error preparing data for projected volume kube-api-access-9mpr8 for pod openshift-network-diagnostics/network-check-target-ljvkb: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Feb 20 14:45:52.857979 master-0 kubenswrapper[4172]: E0220 14:45:52.857832 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8 podName:929dffba-46da-4d81-a437-bc6a9fe79811 nodeName:}" failed. No retries permitted until 2026-02-20 14:45:53.857818131 +0000 UTC m=+94.413043731 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9mpr8" (UniqueName: "kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8") pod "network-check-target-ljvkb" (UID: "929dffba-46da-4d81-a437-bc6a9fe79811") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 14:45:53.208361 master-0 kubenswrapper[4172]: I0220 14:45:53.207763 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:45:53.208361 master-0 kubenswrapper[4172]: E0220 14:45:53.207940 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418" Feb 20 14:45:53.301252 master-0 kubenswrapper[4172]: I0220 14:45:53.301021 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:45:53.301252 master-0 kubenswrapper[4172]: E0220 14:45:53.301174 4172 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 14:45:53.301252 master-0 kubenswrapper[4172]: E0220 14:45:53.301235 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:09.301223091 +0000 UTC m=+109.856448691 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 20 14:45:53.533723 master-0 kubenswrapper[4172]: I0220 14:45:53.533613 4172 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="187177cba6632230d116641fd3dad458ff096f751d761a5c25483f731b58481b" exitCode=0 Feb 20 14:45:53.533723 master-0 kubenswrapper[4172]: I0220 14:45:53.533671 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerDied","Data":"187177cba6632230d116641fd3dad458ff096f751d761a5c25483f731b58481b"} Feb 20 14:45:53.908074 master-0 kubenswrapper[4172]: I0220 14:45:53.907837 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:45:53.908074 master-0 kubenswrapper[4172]: E0220 14:45:53.908047 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 14:45:53.908074 master-0 kubenswrapper[4172]: E0220 14:45:53.908068 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 14:45:53.908074 master-0 kubenswrapper[4172]: E0220 14:45:53.908082 4172 projected.go:194] Error preparing data for projected volume kube-api-access-9mpr8 for 
pod openshift-network-diagnostics/network-check-target-ljvkb: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 14:45:53.908596 master-0 kubenswrapper[4172]: E0220 14:45:53.908138 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8 podName:929dffba-46da-4d81-a437-bc6a9fe79811 nodeName:}" failed. No retries permitted until 2026-02-20 14:45:55.90811902 +0000 UTC m=+96.463344630 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9mpr8" (UniqueName: "kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8") pod "network-check-target-ljvkb" (UID: "929dffba-46da-4d81-a437-bc6a9fe79811") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 14:45:54.207347 master-0 kubenswrapper[4172]: I0220 14:45:54.207214 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:45:54.207589 master-0 kubenswrapper[4172]: E0220 14:45:54.207370 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811" Feb 20 14:45:54.538352 master-0 kubenswrapper[4172]: I0220 14:45:54.538299 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerStarted","Data":"177aef91a6eb47e06724759a7ce69757e5533636be520f8861b5d3c44d7c4272"} Feb 20 14:45:54.704948 master-0 kubenswrapper[4172]: I0220 14:45:54.704841 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-gprr4"] Feb 20 14:45:54.705539 master-0 kubenswrapper[4172]: I0220 14:45:54.705489 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 14:45:54.708410 master-0 kubenswrapper[4172]: I0220 14:45:54.708364 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 20 14:45:54.708410 master-0 kubenswrapper[4172]: I0220 14:45:54.708388 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 20 14:45:54.708534 master-0 kubenswrapper[4172]: I0220 14:45:54.708498 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 20 14:45:54.710761 master-0 kubenswrapper[4172]: I0220 14:45:54.710711 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 20 14:45:54.711680 master-0 kubenswrapper[4172]: I0220 14:45:54.711647 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 20 14:45:54.817430 master-0 kubenswrapper[4172]: I0220 14:45:54.817370 4172 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-ovnkube-identity-cm\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 14:45:54.817430 master-0 kubenswrapper[4172]: I0220 14:45:54.817437 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbw6n\" (UniqueName: \"kubernetes.io/projected/33675e96-ce49-49be-9117-954ac7cca5d5-kube-api-access-hbw6n\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 14:45:54.817778 master-0 kubenswrapper[4172]: I0220 14:45:54.817474 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33675e96-ce49-49be-9117-954ac7cca5d5-webhook-cert\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 14:45:54.817778 master-0 kubenswrapper[4172]: I0220 14:45:54.817513 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-env-overrides\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 14:45:54.918409 master-0 kubenswrapper[4172]: I0220 14:45:54.918341 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-ovnkube-identity-cm\") pod \"network-node-identity-gprr4\" (UID: 
\"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 14:45:54.918409 master-0 kubenswrapper[4172]: I0220 14:45:54.918410 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbw6n\" (UniqueName: \"kubernetes.io/projected/33675e96-ce49-49be-9117-954ac7cca5d5-kube-api-access-hbw6n\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 14:45:54.918621 master-0 kubenswrapper[4172]: I0220 14:45:54.918448 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33675e96-ce49-49be-9117-954ac7cca5d5-webhook-cert\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 14:45:54.918948 master-0 kubenswrapper[4172]: I0220 14:45:54.918896 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-env-overrides\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 14:45:54.919774 master-0 kubenswrapper[4172]: I0220 14:45:54.919729 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-ovnkube-identity-cm\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 14:45:54.919942 master-0 kubenswrapper[4172]: I0220 14:45:54.919880 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-env-overrides\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4"
Feb 20 14:45:54.924812 master-0 kubenswrapper[4172]: I0220 14:45:54.924759 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33675e96-ce49-49be-9117-954ac7cca5d5-webhook-cert\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4"
Feb 20 14:45:54.940719 master-0 kubenswrapper[4172]: I0220 14:45:54.940658 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbw6n\" (UniqueName: \"kubernetes.io/projected/33675e96-ce49-49be-9117-954ac7cca5d5-kube-api-access-hbw6n\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4"
Feb 20 14:45:55.019678 master-0 kubenswrapper[4172]: I0220 14:45:55.019627 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-gprr4"
Feb 20 14:45:55.207313 master-0 kubenswrapper[4172]: I0220 14:45:55.207184 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:45:55.207523 master-0 kubenswrapper[4172]: E0220 14:45:55.207350 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:45:55.545119 master-0 kubenswrapper[4172]: I0220 14:45:55.545063 4172 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="177aef91a6eb47e06724759a7ce69757e5533636be520f8861b5d3c44d7c4272" exitCode=0
Feb 20 14:45:55.546971 master-0 kubenswrapper[4172]: I0220 14:45:55.545188 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerDied","Data":"177aef91a6eb47e06724759a7ce69757e5533636be520f8861b5d3c44d7c4272"}
Feb 20 14:45:55.547554 master-0 kubenswrapper[4172]: I0220 14:45:55.546955 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-gprr4" event={"ID":"33675e96-ce49-49be-9117-954ac7cca5d5","Type":"ContainerStarted","Data":"3595d9d8fc957b18c48383f1ad0fcfa521ef5e3e33c6ab788b51ff8638981630"}
Feb 20 14:45:55.929557 master-0 kubenswrapper[4172]: I0220 14:45:55.929384 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:45:55.929850 master-0 kubenswrapper[4172]: E0220 14:45:55.929677 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 20 14:45:55.929850 master-0 kubenswrapper[4172]: E0220 14:45:55.929739 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 20 14:45:55.929850 master-0 kubenswrapper[4172]: E0220 14:45:55.929755 4172 projected.go:194] Error preparing data for projected volume kube-api-access-9mpr8 for pod openshift-network-diagnostics/network-check-target-ljvkb: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 20 14:45:55.929850 master-0 kubenswrapper[4172]: E0220 14:45:55.929844 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8 podName:929dffba-46da-4d81-a437-bc6a9fe79811 nodeName:}" failed. No retries permitted until 2026-02-20 14:45:59.929816302 +0000 UTC m=+100.485041902 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9mpr8" (UniqueName: "kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8") pod "network-check-target-ljvkb" (UID: "929dffba-46da-4d81-a437-bc6a9fe79811") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 20 14:45:56.207982 master-0 kubenswrapper[4172]: I0220 14:45:56.207762 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:45:56.207982 master-0 kubenswrapper[4172]: E0220 14:45:56.207974 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:45:57.206949 master-0 kubenswrapper[4172]: I0220 14:45:57.206892 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:45:57.207381 master-0 kubenswrapper[4172]: E0220 14:45:57.207040 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:45:58.207275 master-0 kubenswrapper[4172]: I0220 14:45:58.207210 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:45:58.207813 master-0 kubenswrapper[4172]: E0220 14:45:58.207339 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:45:59.207246 master-0 kubenswrapper[4172]: I0220 14:45:59.207184 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:45:59.207641 master-0 kubenswrapper[4172]: E0220 14:45:59.207514 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:45:59.218744 master-0 kubenswrapper[4172]: I0220 14:45:59.218637 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 20 14:45:59.960787 master-0 kubenswrapper[4172]: I0220 14:45:59.960707 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:45:59.961119 master-0 kubenswrapper[4172]: E0220 14:45:59.961002 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 20 14:45:59.961119 master-0 kubenswrapper[4172]: E0220 14:45:59.961028 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 20 14:45:59.961119 master-0 kubenswrapper[4172]: E0220 14:45:59.961045 4172 projected.go:194] Error preparing data for projected volume kube-api-access-9mpr8 for pod openshift-network-diagnostics/network-check-target-ljvkb: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 20 14:45:59.961119 master-0 kubenswrapper[4172]: E0220 14:45:59.961116 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8 podName:929dffba-46da-4d81-a437-bc6a9fe79811 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:07.961094909 +0000 UTC m=+108.516320519 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9mpr8" (UniqueName: "kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8") pod "network-check-target-ljvkb" (UID: "929dffba-46da-4d81-a437-bc6a9fe79811") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 20 14:46:00.207583 master-0 kubenswrapper[4172]: I0220 14:46:00.207477 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:00.208521 master-0 kubenswrapper[4172]: E0220 14:46:00.208481 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:46:00.221071 master-0 kubenswrapper[4172]: I0220 14:46:00.220895 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=1.220876305 podStartE2EDuration="1.220876305s" podCreationTimestamp="2026-02-20 14:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:46:00.220255841 +0000 UTC m=+100.775481441" watchObservedRunningTime="2026-02-20 14:46:00.220876305 +0000 UTC m=+100.776101915"
Feb 20 14:46:01.207320 master-0 kubenswrapper[4172]: I0220 14:46:01.207273 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:01.207512 master-0 kubenswrapper[4172]: E0220 14:46:01.207406 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:46:02.207734 master-0 kubenswrapper[4172]: I0220 14:46:02.207689 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:02.208204 master-0 kubenswrapper[4172]: E0220 14:46:02.207795 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:46:03.207982 master-0 kubenswrapper[4172]: I0220 14:46:03.206913 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:03.207982 master-0 kubenswrapper[4172]: E0220 14:46:03.207073 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:46:04.207160 master-0 kubenswrapper[4172]: I0220 14:46:04.206792 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:04.207160 master-0 kubenswrapper[4172]: E0220 14:46:04.206976 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:46:05.207676 master-0 kubenswrapper[4172]: I0220 14:46:05.207613 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:05.208452 master-0 kubenswrapper[4172]: E0220 14:46:05.207752 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:46:06.207139 master-0 kubenswrapper[4172]: I0220 14:46:06.207067 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:06.207362 master-0 kubenswrapper[4172]: E0220 14:46:06.207203 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:46:07.206892 master-0 kubenswrapper[4172]: I0220 14:46:07.206828 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:07.207281 master-0 kubenswrapper[4172]: E0220 14:46:07.207036 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:46:08.027876 master-0 kubenswrapper[4172]: I0220 14:46:08.027336 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:08.027876 master-0 kubenswrapper[4172]: E0220 14:46:08.027600 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 20 14:46:08.027876 master-0 kubenswrapper[4172]: E0220 14:46:08.027813 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 20 14:46:08.027876 master-0 kubenswrapper[4172]: E0220 14:46:08.027841 4172 projected.go:194] Error preparing data for projected volume kube-api-access-9mpr8 for pod openshift-network-diagnostics/network-check-target-ljvkb: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 20 14:46:08.028416 master-0 kubenswrapper[4172]: E0220 14:46:08.028000 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8 podName:929dffba-46da-4d81-a437-bc6a9fe79811 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:24.027975354 +0000 UTC m=+124.583200994 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9mpr8" (UniqueName: "kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8") pod "network-check-target-ljvkb" (UID: "929dffba-46da-4d81-a437-bc6a9fe79811") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 20 14:46:08.207720 master-0 kubenswrapper[4172]: I0220 14:46:08.207588 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:08.208726 master-0 kubenswrapper[4172]: E0220 14:46:08.207823 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:46:08.586756 master-0 kubenswrapper[4172]: I0220 14:46:08.586690 4172 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="7caadc72530799fe020f6b0140bace32e6cb7e8ebbe6207315d6d035384c83d6" exitCode=0
Feb 20 14:46:08.586888 master-0 kubenswrapper[4172]: I0220 14:46:08.586753 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerDied","Data":"7caadc72530799fe020f6b0140bace32e6cb7e8ebbe6207315d6d035384c83d6"}
Feb 20 14:46:09.207264 master-0 kubenswrapper[4172]: I0220 14:46:09.207005 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:09.207633 master-0 kubenswrapper[4172]: E0220 14:46:09.207351 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:46:09.343478 master-0 kubenswrapper[4172]: I0220 14:46:09.343417 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:09.344304 master-0 kubenswrapper[4172]: E0220 14:46:09.343710 4172 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 14:46:09.344304 master-0 kubenswrapper[4172]: E0220 14:46:09.343897 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:41.343854157 +0000 UTC m=+141.899079917 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 20 14:46:09.592082 master-0 kubenswrapper[4172]: I0220 14:46:09.592009 4172 generic.go:334] "Generic (PLEG): container finished" podID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerID="3053cba1a9dd3bf2f89a83fc28942f27d075c9d218487ad546375069cf4f63bf" exitCode=0
Feb 20 14:46:09.592222 master-0 kubenswrapper[4172]: I0220 14:46:09.592080 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerDied","Data":"3053cba1a9dd3bf2f89a83fc28942f27d075c9d218487ad546375069cf4f63bf"}
Feb 20 14:46:10.208249 master-0 kubenswrapper[4172]: I0220 14:46:10.207720 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:10.208995 master-0 kubenswrapper[4172]: E0220 14:46:10.208951 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:46:10.601278 master-0 kubenswrapper[4172]: I0220 14:46:10.601098 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerStarted","Data":"d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8"}
Feb 20 14:46:10.601278 master-0 kubenswrapper[4172]: I0220 14:46:10.601164 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerStarted","Data":"c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1"}
Feb 20 14:46:10.601278 master-0 kubenswrapper[4172]: I0220 14:46:10.601184 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerStarted","Data":"bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae"}
Feb 20 14:46:10.601278 master-0 kubenswrapper[4172]: I0220 14:46:10.601211 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerStarted","Data":"19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67"}
Feb 20 14:46:10.604054 master-0 kubenswrapper[4172]: I0220 14:46:10.603576 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" event={"ID":"5d2b154b-de63-4c9b-99d8-487fb3035fb9","Type":"ContainerStarted","Data":"c3644a2305f2cac790098fa61dc92fdcede4316b05ab9e68ec6a558810ecdfcf"}
Feb 20 14:46:10.606468 master-0 kubenswrapper[4172]: I0220 14:46:10.606044 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-gprr4" event={"ID":"33675e96-ce49-49be-9117-954ac7cca5d5","Type":"ContainerStarted","Data":"4e27eb5860cdd7ddac83a0d0bd7cc2ce5f678c93e28b4ef780b63b34098f4c71"}
Feb 20 14:46:10.606468 master-0 kubenswrapper[4172]: I0220 14:46:10.606124 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-gprr4" event={"ID":"33675e96-ce49-49be-9117-954ac7cca5d5","Type":"ContainerStarted","Data":"30db4aa1175c82b753660c5597ea88a713c4233cb318dbcdd55159d329e5e404"}
Feb 20 14:46:10.611758 master-0 kubenswrapper[4172]: I0220 14:46:10.611610 4172 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="6e7cd59de9caeb6625ff93f951dca8b15c57f96db1e17aebced0a5231f411d3f" exitCode=0
Feb 20 14:46:10.611758 master-0 kubenswrapper[4172]: I0220 14:46:10.611655 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerDied","Data":"6e7cd59de9caeb6625ff93f951dca8b15c57f96db1e17aebced0a5231f411d3f"}
Feb 20 14:46:10.623870 master-0 kubenswrapper[4172]: I0220 14:46:10.623154 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" podStartSLOduration=3.423405455 podStartE2EDuration="22.623135034s" podCreationTimestamp="2026-02-20 14:45:48 +0000 UTC" firstStartedPulling="2026-02-20 14:45:50.219565612 +0000 UTC m=+90.774791252" lastFinishedPulling="2026-02-20 14:46:09.419295221 +0000 UTC m=+109.974520831" observedRunningTime="2026-02-20 14:46:10.62298972 +0000 UTC m=+111.178215340" watchObservedRunningTime="2026-02-20 14:46:10.623135034 +0000 UTC m=+111.178360644"
Feb 20 14:46:10.668851 master-0 kubenswrapper[4172]: I0220 14:46:10.668615 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-gprr4" podStartSLOduration=1.724036433 podStartE2EDuration="16.668584789s" podCreationTimestamp="2026-02-20 14:45:54 +0000 UTC" firstStartedPulling="2026-02-20 14:45:55.036346713 +0000 UTC m=+95.591572313" lastFinishedPulling="2026-02-20 14:46:09.980895059 +0000 UTC m=+110.536120669" observedRunningTime="2026-02-20 14:46:10.667509553 +0000 UTC m=+111.222735183" watchObservedRunningTime="2026-02-20 14:46:10.668584789 +0000 UTC m=+111.223810469"
Feb 20 14:46:11.207702 master-0 kubenswrapper[4172]: I0220 14:46:11.207529 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:11.207915 master-0 kubenswrapper[4172]: E0220 14:46:11.207713 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:46:11.622435 master-0 kubenswrapper[4172]: I0220 14:46:11.622321 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerStarted","Data":"6edbad64d87976b1f93118bbd0e1e9a7e395f45b91ea14e4ae685cc9850e8c3c"}
Feb 20 14:46:11.628688 master-0 kubenswrapper[4172]: I0220 14:46:11.628626 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerStarted","Data":"b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3"}
Feb 20 14:46:11.628823 master-0 kubenswrapper[4172]: I0220 14:46:11.628693 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerStarted","Data":"dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081"}
Feb 20 14:46:11.647695 master-0 kubenswrapper[4172]: I0220 14:46:11.647592 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" podStartSLOduration=5.521785318 podStartE2EDuration="35.647535059s" podCreationTimestamp="2026-02-20 14:45:36 +0000 UTC" firstStartedPulling="2026-02-20 14:45:37.061363099 +0000 UTC m=+77.616588749" lastFinishedPulling="2026-02-20 14:46:07.18711286 +0000 UTC m=+107.742338490" observedRunningTime="2026-02-20 14:46:11.647415016 +0000 UTC m=+112.202640656" watchObservedRunningTime="2026-02-20 14:46:11.647535059 +0000 UTC m=+112.202760699"
Feb 20 14:46:12.207899 master-0 kubenswrapper[4172]: I0220 14:46:12.207810 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:12.208185 master-0 kubenswrapper[4172]: E0220 14:46:12.208091 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:46:13.207825 master-0 kubenswrapper[4172]: I0220 14:46:13.207253 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:13.207825 master-0 kubenswrapper[4172]: E0220 14:46:13.207796 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:46:13.646356 master-0 kubenswrapper[4172]: I0220 14:46:13.646265 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerStarted","Data":"7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196"}
Feb 20 14:46:14.207872 master-0 kubenswrapper[4172]: I0220 14:46:14.207838 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:14.208573 master-0 kubenswrapper[4172]: E0220 14:46:14.208536 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:46:14.361956 master-0 kubenswrapper[4172]: I0220 14:46:14.361870 4172 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5mwbb"]
Feb 20 14:46:15.207465 master-0 kubenswrapper[4172]: I0220 14:46:15.207412 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:15.207659 master-0 kubenswrapper[4172]: E0220 14:46:15.207603 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:46:15.659077 master-0 kubenswrapper[4172]: I0220 14:46:15.659006 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerStarted","Data":"2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168"}
Feb 20 14:46:15.660132 master-0 kubenswrapper[4172]: I0220 14:46:15.659197 4172 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="ovn-controller" containerID="cri-o://19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67" gracePeriod=30
Feb 20 14:46:15.660132 master-0 kubenswrapper[4172]: I0220 14:46:15.659248 4172 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8" gracePeriod=30
Feb 20 14:46:15.660132 master-0 kubenswrapper[4172]: I0220 14:46:15.659268 4172 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="kube-rbac-proxy-node" containerID="cri-o://c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1" gracePeriod=30
Feb 20 14:46:15.660132 master-0 kubenswrapper[4172]: I0220 14:46:15.659341 4172 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="ovn-acl-logging" containerID="cri-o://bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae" gracePeriod=30
Feb 20 14:46:15.660132 master-0 kubenswrapper[4172]: I0220 14:46:15.659293 4172 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="sbdb" containerID="cri-o://7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196" gracePeriod=30
Feb 20 14:46:15.660132 master-0 kubenswrapper[4172]: I0220 14:46:15.659518 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:46:15.660132 master-0 kubenswrapper[4172]: I0220 14:46:15.659382 4172 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="northd" containerID="cri-o://dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081" gracePeriod=30
Feb 20 14:46:15.660132 master-0 kubenswrapper[4172]: I0220 14:46:15.659356 4172 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="nbdb" containerID="cri-o://b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3" gracePeriod=30
Feb 20 14:46:15.660132 master-0 kubenswrapper[4172]: I0220 14:46:15.659711 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:46:15.660132 master-0 kubenswrapper[4172]: I0220 14:46:15.659790 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb"
Feb 20 14:46:15.662823 master-0 kubenswrapper[4172]: E0220 14:46:15.662745 4172 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Feb 20 14:46:15.663000 master-0 kubenswrapper[4172]: E0220 14:46:15.662778 4172 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Feb 20 14:46:15.665608 master-0 kubenswrapper[4172]: E0220 14:46:15.665527 4172 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Feb 20 14:46:15.665733 master-0 kubenswrapper[4172]: E0220 14:46:15.665546 4172 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Feb 20 14:46:15.668806 master-0 kubenswrapper[4172]: E0220 14:46:15.667878 4172 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Feb 20 14:46:15.668806 master-0 kubenswrapper[4172]: E0220 14:46:15.668001 4172 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="sbdb"
Feb 20 14:46:15.669527 master-0 kubenswrapper[4172]: E0220 14:46:15.668784 4172 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Feb 20 14:46:15.669527 master-0 kubenswrapper[4172]: E0220 14:46:15.668849 4172 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="nbdb"
Feb 20 14:46:15.688569 master-0 kubenswrapper[4172]: I0220 14:46:15.688464 4172 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="ovnkube-controller" containerID="cri-o://2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168" gracePeriod=30
Feb 20 14:46:16.207597 master-0 kubenswrapper[4172]: I0220 14:46:16.207511 4172 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:46:16.207749 master-0 kubenswrapper[4172]: E0220 14:46:16.207687 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811" Feb 20 14:46:16.669387 master-0 kubenswrapper[4172]: I0220 14:46:16.669025 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/kube-rbac-proxy-ovn-metrics/0.log" Feb 20 14:46:16.670344 master-0 kubenswrapper[4172]: I0220 14:46:16.669945 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/kube-rbac-proxy-node/0.log" Feb 20 14:46:16.670667 master-0 kubenswrapper[4172]: I0220 14:46:16.670614 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/ovn-acl-logging/0.log" Feb 20 14:46:16.671359 master-0 kubenswrapper[4172]: I0220 14:46:16.671320 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/ovn-controller/0.log" Feb 20 14:46:16.672070 master-0 kubenswrapper[4172]: I0220 14:46:16.672029 4172 generic.go:334] "Generic (PLEG): container finished" podID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerID="7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196" exitCode=0 Feb 20 14:46:16.672070 master-0 kubenswrapper[4172]: I0220 14:46:16.672054 4172 generic.go:334] "Generic (PLEG): container finished" 
podID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerID="b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3" exitCode=0 Feb 20 14:46:16.672070 master-0 kubenswrapper[4172]: I0220 14:46:16.672064 4172 generic.go:334] "Generic (PLEG): container finished" podID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerID="dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081" exitCode=0 Feb 20 14:46:16.672070 master-0 kubenswrapper[4172]: I0220 14:46:16.672078 4172 generic.go:334] "Generic (PLEG): container finished" podID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerID="d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8" exitCode=143 Feb 20 14:46:16.672427 master-0 kubenswrapper[4172]: I0220 14:46:16.672092 4172 generic.go:334] "Generic (PLEG): container finished" podID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerID="c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1" exitCode=143 Feb 20 14:46:16.672427 master-0 kubenswrapper[4172]: I0220 14:46:16.672103 4172 generic.go:334] "Generic (PLEG): container finished" podID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerID="bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae" exitCode=143 Feb 20 14:46:16.672427 master-0 kubenswrapper[4172]: I0220 14:46:16.672113 4172 generic.go:334] "Generic (PLEG): container finished" podID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerID="19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67" exitCode=143 Feb 20 14:46:16.672427 master-0 kubenswrapper[4172]: I0220 14:46:16.672123 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerDied","Data":"7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196"} Feb 20 14:46:16.672427 master-0 kubenswrapper[4172]: I0220 14:46:16.672210 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" 
event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerDied","Data":"b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3"} Feb 20 14:46:16.672427 master-0 kubenswrapper[4172]: I0220 14:46:16.672236 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerDied","Data":"dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081"} Feb 20 14:46:16.672427 master-0 kubenswrapper[4172]: I0220 14:46:16.672261 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerDied","Data":"d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8"} Feb 20 14:46:16.672427 master-0 kubenswrapper[4172]: I0220 14:46:16.672308 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerDied","Data":"c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1"} Feb 20 14:46:16.672427 master-0 kubenswrapper[4172]: I0220 14:46:16.672331 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerDied","Data":"bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae"} Feb 20 14:46:16.672427 master-0 kubenswrapper[4172]: I0220 14:46:16.672357 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerDied","Data":"19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67"} Feb 20 14:46:16.870396 master-0 kubenswrapper[4172]: I0220 14:46:16.870336 4172 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/ovnkube-controller/0.log" Feb 20 14:46:16.872744 master-0 kubenswrapper[4172]: I0220 14:46:16.872691 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/kube-rbac-proxy-ovn-metrics/0.log" Feb 20 14:46:16.873444 master-0 kubenswrapper[4172]: I0220 14:46:16.873397 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/kube-rbac-proxy-node/0.log" Feb 20 14:46:16.874084 master-0 kubenswrapper[4172]: I0220 14:46:16.874043 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/ovn-acl-logging/0.log" Feb 20 14:46:16.874797 master-0 kubenswrapper[4172]: I0220 14:46:16.874747 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/ovn-controller/0.log" Feb 20 14:46:16.875445 master-0 kubenswrapper[4172]: I0220 14:46:16.875404 4172 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" Feb 20 14:46:16.937410 master-0 kubenswrapper[4172]: I0220 14:46:16.937267 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5gzs6"] Feb 20 14:46:16.937410 master-0 kubenswrapper[4172]: E0220 14:46:16.937411 4172 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="kube-rbac-proxy-node" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: I0220 14:46:16.937432 4172 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="kube-rbac-proxy-node" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: E0220 14:46:16.937447 4172 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="kubecfg-setup" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: I0220 14:46:16.937463 4172 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="kubecfg-setup" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: E0220 14:46:16.937477 4172 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="ovn-acl-logging" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: I0220 14:46:16.937490 4172 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="ovn-acl-logging" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: E0220 14:46:16.937502 4172 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="nbdb" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: I0220 14:46:16.937513 4172 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="nbdb" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: E0220 14:46:16.937529 
4172 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="sbdb" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: I0220 14:46:16.937541 4172 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="sbdb" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: E0220 14:46:16.937553 4172 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="ovn-controller" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: I0220 14:46:16.937568 4172 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="ovn-controller" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: E0220 14:46:16.937581 4172 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="kube-rbac-proxy-ovn-metrics" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: I0220 14:46:16.937593 4172 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="kube-rbac-proxy-ovn-metrics" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: E0220 14:46:16.937607 4172 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="ovnkube-controller" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: I0220 14:46:16.937622 4172 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="ovnkube-controller" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: E0220 14:46:16.937640 4172 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="northd" Feb 20 14:46:16.937683 master-0 kubenswrapper[4172]: I0220 14:46:16.937656 4172 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" 
containerName="northd" Feb 20 14:46:16.938641 master-0 kubenswrapper[4172]: I0220 14:46:16.937722 4172 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="kube-rbac-proxy-node" Feb 20 14:46:16.938641 master-0 kubenswrapper[4172]: I0220 14:46:16.937742 4172 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="nbdb" Feb 20 14:46:16.938641 master-0 kubenswrapper[4172]: I0220 14:46:16.937757 4172 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="ovnkube-controller" Feb 20 14:46:16.938641 master-0 kubenswrapper[4172]: I0220 14:46:16.937770 4172 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="ovn-controller" Feb 20 14:46:16.938641 master-0 kubenswrapper[4172]: I0220 14:46:16.937782 4172 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="sbdb" Feb 20 14:46:16.938641 master-0 kubenswrapper[4172]: I0220 14:46:16.937794 4172 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="northd" Feb 20 14:46:16.938641 master-0 kubenswrapper[4172]: I0220 14:46:16.937807 4172 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="kube-rbac-proxy-ovn-metrics" Feb 20 14:46:16.938641 master-0 kubenswrapper[4172]: I0220 14:46:16.937820 4172 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerName="ovn-acl-logging" Feb 20 14:46:16.939107 master-0 kubenswrapper[4172]: I0220 14:46:16.938864 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.036045 master-0 kubenswrapper[4172]: I0220 14:46:17.035977 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-slash\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036086 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-run-netns\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036148 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-var-lib-openvswitch\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036090 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-slash" (OuterVolumeSpecName: "host-slash") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036211 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovn-node-metrics-cert\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036129 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036243 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036498 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-run-ovn-kubernetes\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036587 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036612 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-ovn\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036667 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036684 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-kubelet\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036723 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036872 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovnkube-config\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036911 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-systemd\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.036973 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-node-log\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: 
I0220 14:46:17.037025 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-node-log" (OuterVolumeSpecName: "node-log") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.037278 master-0 kubenswrapper[4172]: I0220 14:46:17.037068 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnrxr\" (UniqueName: \"kubernetes.io/projected/2eebd463-cbbd-4546-8a7c-4621c45c87a0-kube-api-access-rnrxr\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037110 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-env-overrides\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037142 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-cni-bin\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037187 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037293 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovnkube-script-lib\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037376 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037427 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-cni-netd\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037515 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037528 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037572 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-log-socket\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037623 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-openvswitch\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037664 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-etc-openvswitch\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037683 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-log-socket" (OuterVolumeSpecName: "log-socket") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: 
"2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037706 4172 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-systemd-units\") pod \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\" (UID: \"2eebd463-cbbd-4546-8a7c-4621c45c87a0\") " Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037751 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037802 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-bin\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037876 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-log-socket\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.039869 master-0 kubenswrapper[4172]: I0220 14:46:17.037829 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.037905 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.037874 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038006 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-netd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038089 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038084 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038113 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038159 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-systemd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038258 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038351 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-systemd-units\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038392 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038530 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-kubelet\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038593 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-script-lib\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038649 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-etc-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038704 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-env-overrides\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038753 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-ovn\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.040838 master-0 kubenswrapper[4172]: I0220 14:46:17.038790 4172 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-slash\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.038826 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-netns\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.038906 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-config\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039079 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6nr\" (UniqueName: \"kubernetes.io/projected/21384bd0-495c-406a-9462-e9e740c04686-kube-api-access-gr6nr\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039125 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-node-log\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.042180 master-0 
kubenswrapper[4172]: I0220 14:46:17.039160 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-var-lib-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039199 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21384bd0-495c-406a-9462-e9e740c04686-ovn-node-metrics-cert\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039330 4172 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039373 4172 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039399 4172 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-kubelet\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039417 4172 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 
kubenswrapper[4172]: I0220 14:46:17.039439 4172 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-node-log\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039458 4172 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-env-overrides\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039475 4172 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039491 4172 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039510 4172 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039531 4172 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039549 4172 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-log-socket\") on node \"master-0\" DevicePath \"\"" Feb 
20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039568 4172 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039585 4172 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039602 4172 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-systemd-units\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039619 4172 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-slash\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.042180 master-0 kubenswrapper[4172]: I0220 14:46:17.039636 4172 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-host-run-netns\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.043403 master-0 kubenswrapper[4172]: I0220 14:46:17.039652 4172 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.056033 master-0 kubenswrapper[4172]: I0220 14:46:17.055906 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod 
"2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 14:46:17.056033 master-0 kubenswrapper[4172]: I0220 14:46:17.055987 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eebd463-cbbd-4546-8a7c-4621c45c87a0-kube-api-access-rnrxr" (OuterVolumeSpecName: "kube-api-access-rnrxr") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "kube-api-access-rnrxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:46:17.056207 master-0 kubenswrapper[4172]: I0220 14:46:17.056156 4172 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2eebd463-cbbd-4546-8a7c-4621c45c87a0" (UID: "2eebd463-cbbd-4546-8a7c-4621c45c87a0"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:46:17.140438 master-0 kubenswrapper[4172]: I0220 14:46:17.140353 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-kubelet\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.140438 master-0 kubenswrapper[4172]: I0220 14:46:17.140415 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-systemd-units\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.140438 master-0 kubenswrapper[4172]: I0220 14:46:17.140450 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.140804 master-0 kubenswrapper[4172]: I0220 14:46:17.140487 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-script-lib\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.140804 master-0 kubenswrapper[4172]: I0220 14:46:17.140520 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-etc-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: 
\"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.140804 master-0 kubenswrapper[4172]: I0220 14:46:17.140639 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-systemd-units\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.140804 master-0 kubenswrapper[4172]: I0220 14:46:17.140698 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-kubelet\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.141088 master-0 kubenswrapper[4172]: I0220 14:46:17.140800 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-env-overrides\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.141088 master-0 kubenswrapper[4172]: I0220 14:46:17.140812 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-etc-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.141088 master-0 kubenswrapper[4172]: I0220 14:46:17.140870 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: 
\"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.141088 master-0 kubenswrapper[4172]: I0220 14:46:17.140989 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-slash\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.141088 master-0 kubenswrapper[4172]: I0220 14:46:17.141053 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-ovn\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141130 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-netns\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141158 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-slash\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141174 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-config\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141221 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-netns\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141220 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-ovn\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141341 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6nr\" (UniqueName: \"kubernetes.io/projected/21384bd0-495c-406a-9462-e9e740c04686-kube-api-access-gr6nr\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141412 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-var-lib-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141451 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-node-log\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141496 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21384bd0-495c-406a-9462-e9e740c04686-ovn-node-metrics-cert\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141616 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-node-log\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141679 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-var-lib-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141837 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-log-socket\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141900 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-bin\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.141975 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-log-socket\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.142005 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.142047 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-bin\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.142491 master-0 kubenswrapper[4172]: I0220 14:46:17.142054 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-netd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.143486 master-0 kubenswrapper[4172]: I0220 14:46:17.142098 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.143486 master-0 kubenswrapper[4172]: I0220 14:46:17.142103 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-systemd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.143486 master-0 kubenswrapper[4172]: I0220 14:46:17.142150 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-netd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.143486 master-0 kubenswrapper[4172]: I0220 14:46:17.142147 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.143486 master-0 kubenswrapper[4172]: I0220 14:46:17.142214 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.143486 master-0 kubenswrapper[4172]: I0220 14:46:17.142228 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-env-overrides\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" 
Feb 20 14:46:17.143486 master-0 kubenswrapper[4172]: I0220 14:46:17.142235 4172 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2eebd463-cbbd-4546-8a7c-4621c45c87a0-run-systemd\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.143486 master-0 kubenswrapper[4172]: I0220 14:46:17.142289 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-systemd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.143486 master-0 kubenswrapper[4172]: I0220 14:46:17.142301 4172 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnrxr\" (UniqueName: \"kubernetes.io/projected/2eebd463-cbbd-4546-8a7c-4621c45c87a0-kube-api-access-rnrxr\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.143486 master-0 kubenswrapper[4172]: I0220 14:46:17.142399 4172 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2eebd463-cbbd-4546-8a7c-4621c45c87a0-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:17.143486 master-0 kubenswrapper[4172]: I0220 14:46:17.142483 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-script-lib\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.143486 master-0 kubenswrapper[4172]: I0220 14:46:17.142696 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-config\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.147378 master-0 kubenswrapper[4172]: I0220 14:46:17.147320 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21384bd0-495c-406a-9462-e9e740c04686-ovn-node-metrics-cert\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.169682 master-0 kubenswrapper[4172]: I0220 14:46:17.169609 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6nr\" (UniqueName: \"kubernetes.io/projected/21384bd0-495c-406a-9462-e9e740c04686-kube-api-access-gr6nr\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.207795 master-0 kubenswrapper[4172]: I0220 14:46:17.207644 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:46:17.208043 master-0 kubenswrapper[4172]: E0220 14:46:17.207985 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418" Feb 20 14:46:17.224115 master-0 kubenswrapper[4172]: I0220 14:46:17.224047 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Feb 20 14:46:17.260092 master-0 kubenswrapper[4172]: I0220 14:46:17.260023 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:17.281578 master-0 kubenswrapper[4172]: W0220 14:46:17.281497 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21384bd0_495c_406a_9462_e9e740c04686.slice/crio-26c5fe83ca44257f00aa75056a5ba23aa71fd99df73033faf567ea11ded1340f WatchSource:0}: Error finding container 26c5fe83ca44257f00aa75056a5ba23aa71fd99df73033faf567ea11ded1340f: Status 404 returned error can't find the container with id 26c5fe83ca44257f00aa75056a5ba23aa71fd99df73033faf567ea11ded1340f Feb 20 14:46:17.678324 master-0 kubenswrapper[4172]: I0220 14:46:17.678250 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/ovnkube-controller/0.log" Feb 20 14:46:17.680906 master-0 kubenswrapper[4172]: I0220 14:46:17.680864 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/kube-rbac-proxy-ovn-metrics/0.log" Feb 20 14:46:17.681571 master-0 kubenswrapper[4172]: I0220 14:46:17.681517 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/kube-rbac-proxy-node/0.log" Feb 20 14:46:17.682288 master-0 kubenswrapper[4172]: I0220 14:46:17.682244 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/ovn-acl-logging/0.log" Feb 20 14:46:17.682952 master-0 kubenswrapper[4172]: I0220 14:46:17.682883 4172 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5mwbb_2eebd463-cbbd-4546-8a7c-4621c45c87a0/ovn-controller/0.log" Feb 20 14:46:17.683546 master-0 kubenswrapper[4172]: I0220 14:46:17.683482 4172 generic.go:334] "Generic (PLEG): container finished" 
podID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" containerID="2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168" exitCode=1 Feb 20 14:46:17.683633 master-0 kubenswrapper[4172]: I0220 14:46:17.683550 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerDied","Data":"2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168"} Feb 20 14:46:17.683633 master-0 kubenswrapper[4172]: I0220 14:46:17.683606 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" event={"ID":"2eebd463-cbbd-4546-8a7c-4621c45c87a0","Type":"ContainerDied","Data":"58af561de977e92642e6fc5cdaf1d69bc8b83ef4eff8e5da7ccf5c477049d9be"} Feb 20 14:46:17.683633 master-0 kubenswrapper[4172]: I0220 14:46:17.683607 4172 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5mwbb" Feb 20 14:46:17.683810 master-0 kubenswrapper[4172]: I0220 14:46:17.683635 4172 scope.go:117] "RemoveContainer" containerID="2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168" Feb 20 14:46:17.686389 master-0 kubenswrapper[4172]: I0220 14:46:17.686284 4172 generic.go:334] "Generic (PLEG): container finished" podID="21384bd0-495c-406a-9462-e9e740c04686" containerID="325237c1c62eee1b6dbe253582be0281f8aeaa79ed6559821ac6420b7b9c38ca" exitCode=0 Feb 20 14:46:17.686389 master-0 kubenswrapper[4172]: I0220 14:46:17.686378 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerDied","Data":"325237c1c62eee1b6dbe253582be0281f8aeaa79ed6559821ac6420b7b9c38ca"} Feb 20 14:46:17.686865 master-0 kubenswrapper[4172]: I0220 14:46:17.686448 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" 
event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"26c5fe83ca44257f00aa75056a5ba23aa71fd99df73033faf567ea11ded1340f"} Feb 20 14:46:17.705855 master-0 kubenswrapper[4172]: I0220 14:46:17.704449 4172 scope.go:117] "RemoveContainer" containerID="7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196" Feb 20 14:46:17.708348 master-0 kubenswrapper[4172]: I0220 14:46:17.708251 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=0.708221759 podStartE2EDuration="708.221759ms" podCreationTimestamp="2026-02-20 14:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:46:17.705959426 +0000 UTC m=+118.261185056" watchObservedRunningTime="2026-02-20 14:46:17.708221759 +0000 UTC m=+118.263447409" Feb 20 14:46:17.723420 master-0 kubenswrapper[4172]: I0220 14:46:17.723342 4172 scope.go:117] "RemoveContainer" containerID="b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3" Feb 20 14:46:17.756683 master-0 kubenswrapper[4172]: I0220 14:46:17.756626 4172 scope.go:117] "RemoveContainer" containerID="dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081" Feb 20 14:46:17.765585 master-0 kubenswrapper[4172]: I0220 14:46:17.765531 4172 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5mwbb"] Feb 20 14:46:17.774118 master-0 kubenswrapper[4172]: I0220 14:46:17.773506 4172 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5mwbb"] Feb 20 14:46:17.778265 master-0 kubenswrapper[4172]: I0220 14:46:17.778205 4172 scope.go:117] "RemoveContainer" containerID="d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8" Feb 20 14:46:17.819916 master-0 kubenswrapper[4172]: I0220 14:46:17.819368 4172 scope.go:117] "RemoveContainer" 
containerID="c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1" Feb 20 14:46:17.831014 master-0 kubenswrapper[4172]: I0220 14:46:17.830921 4172 scope.go:117] "RemoveContainer" containerID="bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae" Feb 20 14:46:17.845127 master-0 kubenswrapper[4172]: I0220 14:46:17.845078 4172 scope.go:117] "RemoveContainer" containerID="19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67" Feb 20 14:46:17.862171 master-0 kubenswrapper[4172]: I0220 14:46:17.862135 4172 scope.go:117] "RemoveContainer" containerID="3053cba1a9dd3bf2f89a83fc28942f27d075c9d218487ad546375069cf4f63bf" Feb 20 14:46:17.879565 master-0 kubenswrapper[4172]: I0220 14:46:17.879531 4172 scope.go:117] "RemoveContainer" containerID="2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168" Feb 20 14:46:17.880314 master-0 kubenswrapper[4172]: E0220 14:46:17.880255 4172 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168\": container with ID starting with 2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168 not found: ID does not exist" containerID="2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168" Feb 20 14:46:17.880488 master-0 kubenswrapper[4172]: I0220 14:46:17.880317 4172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168"} err="failed to get container status \"2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168\": rpc error: code = NotFound desc = could not find container \"2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168\": container with ID starting with 2b1556cc9b1f74ca4a29d6699e6d191d3cee04268a1ec757e94fd79a4124d168 not found: ID does not exist" Feb 20 14:46:17.880488 master-0 kubenswrapper[4172]: I0220 14:46:17.880483 
4172 scope.go:117] "RemoveContainer" containerID="7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196" Feb 20 14:46:17.881241 master-0 kubenswrapper[4172]: E0220 14:46:17.881000 4172 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196\": container with ID starting with 7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196 not found: ID does not exist" containerID="7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196" Feb 20 14:46:17.881241 master-0 kubenswrapper[4172]: I0220 14:46:17.881055 4172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196"} err="failed to get container status \"7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196\": rpc error: code = NotFound desc = could not find container \"7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196\": container with ID starting with 7068e5d613d2d6125eab5ba509945d32aebbb8cb263e3857c8cd181ee64e7196 not found: ID does not exist" Feb 20 14:46:17.881241 master-0 kubenswrapper[4172]: I0220 14:46:17.881096 4172 scope.go:117] "RemoveContainer" containerID="b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3" Feb 20 14:46:17.881558 master-0 kubenswrapper[4172]: E0220 14:46:17.881518 4172 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3\": container with ID starting with b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3 not found: ID does not exist" containerID="b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3" Feb 20 14:46:17.881637 master-0 kubenswrapper[4172]: I0220 14:46:17.881564 4172 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3"} err="failed to get container status \"b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3\": rpc error: code = NotFound desc = could not find container \"b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3\": container with ID starting with b6f8160a2b0b8311c6114939c085470005a6749f835a8c4d6d5c2fcf31afcca3 not found: ID does not exist" Feb 20 14:46:17.881637 master-0 kubenswrapper[4172]: I0220 14:46:17.881597 4172 scope.go:117] "RemoveContainer" containerID="dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081" Feb 20 14:46:17.882227 master-0 kubenswrapper[4172]: E0220 14:46:17.882092 4172 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081\": container with ID starting with dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081 not found: ID does not exist" containerID="dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081" Feb 20 14:46:17.882227 master-0 kubenswrapper[4172]: I0220 14:46:17.882152 4172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081"} err="failed to get container status \"dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081\": rpc error: code = NotFound desc = could not find container \"dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081\": container with ID starting with dd4755c9bde272bff332ee42866c0d4a7f3a3eded7cfea10c0f4a8f1a44e3081 not found: ID does not exist" Feb 20 14:46:17.882227 master-0 kubenswrapper[4172]: I0220 14:46:17.882187 4172 scope.go:117] "RemoveContainer" containerID="d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8" Feb 20 14:46:17.882706 master-0 kubenswrapper[4172]: 
E0220 14:46:17.882654 4172 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8\": container with ID starting with d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8 not found: ID does not exist" containerID="d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8" Feb 20 14:46:17.882781 master-0 kubenswrapper[4172]: I0220 14:46:17.882702 4172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8"} err="failed to get container status \"d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8\": rpc error: code = NotFound desc = could not find container \"d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8\": container with ID starting with d4028d8197e8222efa0b2d5cd4058ec9e32593743cbeb5a63914f3f5319abea8 not found: ID does not exist" Feb 20 14:46:17.882781 master-0 kubenswrapper[4172]: I0220 14:46:17.882733 4172 scope.go:117] "RemoveContainer" containerID="c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1" Feb 20 14:46:17.883329 master-0 kubenswrapper[4172]: E0220 14:46:17.883278 4172 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1\": container with ID starting with c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1 not found: ID does not exist" containerID="c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1" Feb 20 14:46:17.883401 master-0 kubenswrapper[4172]: I0220 14:46:17.883326 4172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1"} err="failed to get container status 
\"c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1\": rpc error: code = NotFound desc = could not find container \"c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1\": container with ID starting with c28097dc6a652034b5d588b2b3b129d2c2024867e287e252c3a143d563acc3d1 not found: ID does not exist" Feb 20 14:46:17.883401 master-0 kubenswrapper[4172]: I0220 14:46:17.883356 4172 scope.go:117] "RemoveContainer" containerID="bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae" Feb 20 14:46:17.883759 master-0 kubenswrapper[4172]: E0220 14:46:17.883714 4172 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae\": container with ID starting with bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae not found: ID does not exist" containerID="bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae" Feb 20 14:46:17.883842 master-0 kubenswrapper[4172]: I0220 14:46:17.883765 4172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae"} err="failed to get container status \"bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae\": rpc error: code = NotFound desc = could not find container \"bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae\": container with ID starting with bc4c3be5183efbd6157a006026a1e8fb1ea2037e416fe318d6436aa5b3c2c4ae not found: ID does not exist" Feb 20 14:46:17.883842 master-0 kubenswrapper[4172]: I0220 14:46:17.883804 4172 scope.go:117] "RemoveContainer" containerID="19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67" Feb 20 14:46:17.884381 master-0 kubenswrapper[4172]: E0220 14:46:17.884196 4172 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67\": container with ID starting with 19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67 not found: ID does not exist" containerID="19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67" Feb 20 14:46:17.884381 master-0 kubenswrapper[4172]: I0220 14:46:17.884243 4172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67"} err="failed to get container status \"19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67\": rpc error: code = NotFound desc = could not find container \"19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67\": container with ID starting with 19f1ee7381af80c76e4b28872d81e46510b61cd585ab99297781f3d21d729d67 not found: ID does not exist" Feb 20 14:46:17.884381 master-0 kubenswrapper[4172]: I0220 14:46:17.884272 4172 scope.go:117] "RemoveContainer" containerID="3053cba1a9dd3bf2f89a83fc28942f27d075c9d218487ad546375069cf4f63bf" Feb 20 14:46:17.884761 master-0 kubenswrapper[4172]: E0220 14:46:17.884696 4172 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3053cba1a9dd3bf2f89a83fc28942f27d075c9d218487ad546375069cf4f63bf\": container with ID starting with 3053cba1a9dd3bf2f89a83fc28942f27d075c9d218487ad546375069cf4f63bf not found: ID does not exist" containerID="3053cba1a9dd3bf2f89a83fc28942f27d075c9d218487ad546375069cf4f63bf" Feb 20 14:46:17.884761 master-0 kubenswrapper[4172]: I0220 14:46:17.884749 4172 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3053cba1a9dd3bf2f89a83fc28942f27d075c9d218487ad546375069cf4f63bf"} err="failed to get container status \"3053cba1a9dd3bf2f89a83fc28942f27d075c9d218487ad546375069cf4f63bf\": rpc error: code = NotFound desc = could not find container 
\"3053cba1a9dd3bf2f89a83fc28942f27d075c9d218487ad546375069cf4f63bf\": container with ID starting with 3053cba1a9dd3bf2f89a83fc28942f27d075c9d218487ad546375069cf4f63bf not found: ID does not exist" Feb 20 14:46:18.207406 master-0 kubenswrapper[4172]: I0220 14:46:18.207341 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:46:18.207555 master-0 kubenswrapper[4172]: E0220 14:46:18.207516 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811" Feb 20 14:46:18.214746 master-0 kubenswrapper[4172]: I0220 14:46:18.214679 4172 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eebd463-cbbd-4546-8a7c-4621c45c87a0" path="/var/lib/kubelet/pods/2eebd463-cbbd-4546-8a7c-4621c45c87a0/volumes" Feb 20 14:46:18.703769 master-0 kubenswrapper[4172]: I0220 14:46:18.703652 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"62a99730256e3da9df09e2ce0694443877449feb60b369ea0edbb62d7804a6cb"} Feb 20 14:46:18.703769 master-0 kubenswrapper[4172]: I0220 14:46:18.703742 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"ed310b368397e816ac9d5651b95ab2ee3ceabf0ed71343ecfdc11e63fc82bf1d"} Feb 20 14:46:18.703769 master-0 kubenswrapper[4172]: I0220 14:46:18.703768 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" 
event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"90e4db478907522d61b2829ddc102e1a38c166c49934ed77415fc7f72dda10f0"} Feb 20 14:46:18.704848 master-0 kubenswrapper[4172]: I0220 14:46:18.703795 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"5ca73d36b494abb0df059713a8f5425aebb316af1c2c669bb3690be1c4b60660"} Feb 20 14:46:18.704848 master-0 kubenswrapper[4172]: I0220 14:46:18.703820 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"24d71ae939c27497695911553274ea2034c00a6fd53d16474b7a6d926d474f9c"} Feb 20 14:46:18.704848 master-0 kubenswrapper[4172]: I0220 14:46:18.703842 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"d5988f2df105cf8c575e9b1c55b4b257b1238892c0d445f0831ad99b911ed459"} Feb 20 14:46:19.206961 master-0 kubenswrapper[4172]: I0220 14:46:19.206880 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:46:19.207233 master-0 kubenswrapper[4172]: E0220 14:46:19.207142 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418" Feb 20 14:46:20.075790 master-0 kubenswrapper[4172]: E0220 14:46:20.075544 4172 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 20 14:46:20.207055 master-0 kubenswrapper[4172]: I0220 14:46:20.206980 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:46:20.208550 master-0 kubenswrapper[4172]: E0220 14:46:20.208453 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811" Feb 20 14:46:20.232156 master-0 kubenswrapper[4172]: E0220 14:46:20.232045 4172 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 20 14:46:20.716917 master-0 kubenswrapper[4172]: I0220 14:46:20.716857 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"eb4d015a488a0971e00d7594bd984177653b19f895e48b58140d83ce5c2ae58c"} Feb 20 14:46:21.207446 master-0 kubenswrapper[4172]: I0220 14:46:21.207330 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:46:21.208628 master-0 kubenswrapper[4172]: E0220 14:46:21.207636 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418" Feb 20 14:46:22.208114 master-0 kubenswrapper[4172]: I0220 14:46:22.207973 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:46:22.209145 master-0 kubenswrapper[4172]: E0220 14:46:22.208212 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811" Feb 20 14:46:22.991133 master-0 kubenswrapper[4172]: I0220 14:46:22.990727 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:46:22.991133 master-0 kubenswrapper[4172]: E0220 14:46:22.990910 4172 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 14:46:22.991133 master-0 kubenswrapper[4172]: E0220 14:46:22.991066 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:47:26.991036307 +0000 UTC m=+187.546261947 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found Feb 20 14:46:23.207196 master-0 kubenswrapper[4172]: I0220 14:46:23.207083 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:46:23.207455 master-0 kubenswrapper[4172]: E0220 14:46:23.207381 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418" Feb 20 14:46:23.733696 master-0 kubenswrapper[4172]: I0220 14:46:23.733239 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"a7fc05cab5fa5d76a89a52b0f5a53914558966a5d0c6a7984a6068b77bc7a605"} Feb 20 14:46:23.734552 master-0 kubenswrapper[4172]: I0220 14:46:23.733756 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:23.734552 master-0 kubenswrapper[4172]: I0220 14:46:23.733785 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:23.734552 master-0 kubenswrapper[4172]: I0220 14:46:23.733804 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:23.769604 master-0 kubenswrapper[4172]: I0220 14:46:23.769515 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:23.772195 master-0 kubenswrapper[4172]: I0220 14:46:23.772101 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" podStartSLOduration=7.772080986 podStartE2EDuration="7.772080986s" podCreationTimestamp="2026-02-20 14:46:16 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:46:23.771539823 +0000 UTC m=+124.326765493" watchObservedRunningTime="2026-02-20 14:46:23.772080986 +0000 UTC m=+124.327306616" Feb 20 14:46:23.772385 master-0 kubenswrapper[4172]: I0220 14:46:23.772311 4172 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:24.101525 master-0 kubenswrapper[4172]: I0220 14:46:24.101431 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:46:24.101773 master-0 kubenswrapper[4172]: E0220 14:46:24.101705 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 20 14:46:24.101773 master-0 kubenswrapper[4172]: E0220 14:46:24.101767 4172 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 20 14:46:24.101916 master-0 kubenswrapper[4172]: E0220 14:46:24.101797 4172 projected.go:194] Error preparing data for projected volume kube-api-access-9mpr8 for pod openshift-network-diagnostics/network-check-target-ljvkb: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 14:46:24.101916 master-0 kubenswrapper[4172]: E0220 14:46:24.101899 4172 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8 podName:929dffba-46da-4d81-a437-bc6a9fe79811 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:56.101863638 +0000 UTC m=+156.657089308 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9mpr8" (UniqueName: "kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8") pod "network-check-target-ljvkb" (UID: "929dffba-46da-4d81-a437-bc6a9fe79811") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 20 14:46:24.207622 master-0 kubenswrapper[4172]: I0220 14:46:24.207558 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:46:24.207830 master-0 kubenswrapper[4172]: E0220 14:46:24.207742 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811" Feb 20 14:46:24.240991 master-0 kubenswrapper[4172]: I0220 14:46:24.240907 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-99lkv"] Feb 20 14:46:24.241251 master-0 kubenswrapper[4172]: I0220 14:46:24.241081 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:24.241251 master-0 kubenswrapper[4172]: E0220 14:46:24.241219 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:46:24.245278 master-0 kubenswrapper[4172]: I0220 14:46:24.245221 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-ljvkb"]
Feb 20 14:46:24.737047 master-0 kubenswrapper[4172]: I0220 14:46:24.736824 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:24.737047 master-0 kubenswrapper[4172]: E0220 14:46:24.737011 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:46:25.233609 master-0 kubenswrapper[4172]: E0220 14:46:25.233540 4172 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 20 14:46:26.208060 master-0 kubenswrapper[4172]: I0220 14:46:26.207885 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:26.209136 master-0 kubenswrapper[4172]: I0220 14:46:26.207899 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:26.209136 master-0 kubenswrapper[4172]: E0220 14:46:26.208159 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:46:26.209136 master-0 kubenswrapper[4172]: E0220 14:46:26.208336 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:46:28.207040 master-0 kubenswrapper[4172]: I0220 14:46:28.206881 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:28.208132 master-0 kubenswrapper[4172]: E0220 14:46:28.207176 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:46:28.208132 master-0 kubenswrapper[4172]: I0220 14:46:28.207290 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:28.208132 master-0 kubenswrapper[4172]: E0220 14:46:28.207473 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:46:30.207657 master-0 kubenswrapper[4172]: I0220 14:46:30.207174 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:30.207657 master-0 kubenswrapper[4172]: I0220 14:46:30.207247 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:30.208721 master-0 kubenswrapper[4172]: E0220 14:46:30.208644 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ljvkb" podUID="929dffba-46da-4d81-a437-bc6a9fe79811"
Feb 20 14:46:30.208803 master-0 kubenswrapper[4172]: E0220 14:46:30.208773 4172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-99lkv" podUID="5ea4c132-b6d0-4dc9-942d-48e359eed418"
Feb 20 14:46:32.206912 master-0 kubenswrapper[4172]: I0220 14:46:32.206832 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:32.208259 master-0 kubenswrapper[4172]: I0220 14:46:32.207198 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:32.210619 master-0 kubenswrapper[4172]: I0220 14:46:32.210541 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 20 14:46:32.210982 master-0 kubenswrapper[4172]: I0220 14:46:32.210719 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 20 14:46:32.211149 master-0 kubenswrapper[4172]: I0220 14:46:32.211090 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 20 14:46:36.946644 master-0 kubenswrapper[4172]: I0220 14:46:36.946546 4172 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Feb 20 14:46:36.989900 master-0 kubenswrapper[4172]: I0220 14:46:36.989801 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm"]
Feb 20 14:46:36.990386 master-0 kubenswrapper[4172]: I0220 14:46:36.990313 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm"
Feb 20 14:46:36.994725 master-0 kubenswrapper[4172]: I0220 14:46:36.994656 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 20 14:46:36.994914 master-0 kubenswrapper[4172]: I0220 14:46:36.994834 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 20 14:46:36.994914 master-0 kubenswrapper[4172]: I0220 14:46:36.994864 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 20 14:46:36.995400 master-0 kubenswrapper[4172]: I0220 14:46:36.995354 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.000429 master-0 kubenswrapper[4172]: I0220 14:46:37.000366 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww"]
Feb 20 14:46:37.001001 master-0 kubenswrapper[4172]: I0220 14:46:37.000958 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww"
Feb 20 14:46:37.001778 master-0 kubenswrapper[4172]: I0220 14:46:37.001709 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6"]
Feb 20 14:46:37.002298 master-0 kubenswrapper[4172]: I0220 14:46:37.002254 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6"
Feb 20 14:46:37.003280 master-0 kubenswrapper[4172]: I0220 14:46:37.003233 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 20 14:46:37.003280 master-0 kubenswrapper[4172]: I0220 14:46:37.003234 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.004099 master-0 kubenswrapper[4172]: I0220 14:46:37.003987 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj"]
Feb 20 14:46:37.004677 master-0 kubenswrapper[4172]: I0220 14:46:37.004628 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj"
Feb 20 14:46:37.005447 master-0 kubenswrapper[4172]: I0220 14:46:37.005404 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 20 14:46:37.005653 master-0 kubenswrapper[4172]: I0220 14:46:37.005608 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 20 14:46:37.005653 master-0 kubenswrapper[4172]: I0220 14:46:37.005643 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 20 14:46:37.006508 master-0 kubenswrapper[4172]: I0220 14:46:37.006459 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.006508 master-0 kubenswrapper[4172]: I0220 14:46:37.006464 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 20 14:46:37.008250 master-0 kubenswrapper[4172]: I0220 14:46:37.008198 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.008579 master-0 kubenswrapper[4172]: I0220 14:46:37.008531 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 20 14:46:37.012408 master-0 kubenswrapper[4172]: I0220 14:46:37.012354 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 20 14:46:37.027272 master-0 kubenswrapper[4172]: I0220 14:46:37.027192 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c31b8a7-edcb-403d-9122-7eb740f7d659-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww"
Feb 20 14:46:37.027494 master-0 kubenswrapper[4172]: I0220 14:46:37.027363 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c31b8a7-edcb-403d-9122-7eb740f7d659-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww"
Feb 20 14:46:37.027494 master-0 kubenswrapper[4172]: I0220 14:46:37.027462 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e9807a-859c-44c1-8511-0066b0f59ff8-config\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6"
Feb 20 14:46:37.027616 master-0 kubenswrapper[4172]: I0220 14:46:37.027530 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj4dx\" (UniqueName: \"kubernetes.io/projected/c81ad608-a8ad-4289-a8d2-d48acb9b540c-kube-api-access-wj4dx\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj"
Feb 20 14:46:37.027616 master-0 kubenswrapper[4172]: I0220 14:46:37.027602 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e9807a-859c-44c1-8511-0066b0f59ff8-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6"
Feb 20 14:46:37.027753 master-0 kubenswrapper[4172]: I0220 14:46:37.027675 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svhtr\" (UniqueName: \"kubernetes.io/projected/45d7ef0c-272b-4d1e-965f-484975d5d25c-kube-api-access-svhtr\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm"
Feb 20 14:46:37.027753 master-0 kubenswrapper[4172]: I0220 14:46:37.027735 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c81ad608-a8ad-4289-a8d2-d48acb9b540c-serving-cert\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj"
Feb 20 14:46:37.027864 master-0 kubenswrapper[4172]: I0220 14:46:37.027793 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45d7ef0c-272b-4d1e-965f-484975d5d25c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm"
Feb 20 14:46:37.027944 master-0 kubenswrapper[4172]: I0220 14:46:37.027854 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43e9807a-859c-44c1-8511-0066b0f59ff8-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6"
Feb 20 14:46:37.028021 master-0 kubenswrapper[4172]: I0220 14:46:37.027990 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d7ef0c-272b-4d1e-965f-484975d5d25c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm"
Feb 20 14:46:37.028084 master-0 kubenswrapper[4172]: I0220 14:46:37.028065 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c31b8a7-edcb-403d-9122-7eb740f7d659-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww"
Feb 20 14:46:37.031162 master-0 kubenswrapper[4172]: I0220 14:46:37.028136 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c81ad608-a8ad-4289-a8d2-d48acb9b540c-config\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj"
Feb 20 14:46:37.032978 master-0 kubenswrapper[4172]: I0220 14:46:37.032880 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"]
Feb 20 14:46:37.035043 master-0 kubenswrapper[4172]: I0220 14:46:37.033891 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt"]
Feb 20 14:46:37.035043 master-0 kubenswrapper[4172]: I0220 14:46:37.034746 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt"
Feb 20 14:46:37.035043 master-0 kubenswrapper[4172]: I0220 14:46:37.035016 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-97m7r"]
Feb 20 14:46:37.036986 master-0 kubenswrapper[4172]: I0220 14:46:37.035530 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"
Feb 20 14:46:37.041810 master-0 kubenswrapper[4172]: I0220 14:46:37.041759 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:46:37.045222 master-0 kubenswrapper[4172]: I0220 14:46:37.042590 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"]
Feb 20 14:46:37.045222 master-0 kubenswrapper[4172]: I0220 14:46:37.043141 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"
Feb 20 14:46:37.045222 master-0 kubenswrapper[4172]: I0220 14:46:37.043840 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"]
Feb 20 14:46:37.045222 master-0 kubenswrapper[4172]: I0220 14:46:37.044213 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.045222 master-0 kubenswrapper[4172]: I0220 14:46:37.044395 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 20 14:46:37.045222 master-0 kubenswrapper[4172]: I0220 14:46:37.044439 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:37.045222 master-0 kubenswrapper[4172]: I0220 14:46:37.044687 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 20 14:46:37.045222 master-0 kubenswrapper[4172]: I0220 14:46:37.044849 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 20 14:46:37.046276 master-0 kubenswrapper[4172]: I0220 14:46:37.046241 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr"]
Feb 20 14:46:37.049701 master-0 kubenswrapper[4172]: I0220 14:46:37.046687 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr"
Feb 20 14:46:37.049701 master-0 kubenswrapper[4172]: I0220 14:46:37.047460 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 20 14:46:37.049701 master-0 kubenswrapper[4172]: I0220 14:46:37.049397 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 20 14:46:37.049701 master-0 kubenswrapper[4172]: I0220 14:46:37.049547 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 20 14:46:37.050044 master-0 kubenswrapper[4172]: I0220 14:46:37.049712 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s"]
Feb 20 14:46:37.050583 master-0 kubenswrapper[4172]: I0220 14:46:37.050527 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s"
Feb 20 14:46:37.051028 master-0 kubenswrapper[4172]: I0220 14:46:37.050777 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq"]
Feb 20 14:46:37.051998 master-0 kubenswrapper[4172]: I0220 14:46:37.051314 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq"
Feb 20 14:46:37.051998 master-0 kubenswrapper[4172]: I0220 14:46:37.051319 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 20 14:46:37.051998 master-0 kubenswrapper[4172]: I0220 14:46:37.051368 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 20 14:46:37.051998 master-0 kubenswrapper[4172]: I0220 14:46:37.051842 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 20 14:46:37.051998 master-0 kubenswrapper[4172]: I0220 14:46:37.051551 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 20 14:46:37.051998 master-0 kubenswrapper[4172]: I0220 14:46:37.051590 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 20 14:46:37.055017 master-0 kubenswrapper[4172]: I0220 14:46:37.052616 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24"]
Feb 20 14:46:37.055017 master-0 kubenswrapper[4172]: I0220 14:46:37.052892 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 20 14:46:37.055017 master-0 kubenswrapper[4172]: I0220 14:46:37.053212 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24"
Feb 20 14:46:37.055017 master-0 kubenswrapper[4172]: I0220 14:46:37.053355 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.055017 master-0 kubenswrapper[4172]: I0220 14:46:37.053493 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.055017 master-0 kubenswrapper[4172]: I0220 14:46:37.053534 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"]
Feb 20 14:46:37.055017 master-0 kubenswrapper[4172]: I0220 14:46:37.053605 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 20 14:46:37.055017 master-0 kubenswrapper[4172]: I0220 14:46:37.053736 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 20 14:46:37.055017 master-0 kubenswrapper[4172]: I0220 14:46:37.053871 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 20 14:46:37.055017 master-0 kubenswrapper[4172]: I0220 14:46:37.054034 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"
Feb 20 14:46:37.055017 master-0 kubenswrapper[4172]: I0220 14:46:37.054877 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-gkrph"]
Feb 20 14:46:37.055017 master-0 kubenswrapper[4172]: I0220 14:46:37.054920 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.056087 master-0 kubenswrapper[4172]: I0220 14:46:37.055269 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph"
Feb 20 14:46:37.056087 master-0 kubenswrapper[4172]: I0220 14:46:37.055335 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 20 14:46:37.056087 master-0 kubenswrapper[4172]: I0220 14:46:37.056071 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 20 14:46:37.057083 master-0 kubenswrapper[4172]: I0220 14:46:37.057041 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"]
Feb 20 14:46:37.057535 master-0 kubenswrapper[4172]: I0220 14:46:37.057489 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 14:46:37.059092 master-0 kubenswrapper[4172]: I0220 14:46:37.057781 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"]
Feb 20 14:46:37.059092 master-0 kubenswrapper[4172]: I0220 14:46:37.058284 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"
Feb 20 14:46:37.066803 master-0 kubenswrapper[4172]: I0220 14:46:37.063492 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 20 14:46:37.066803 master-0 kubenswrapper[4172]: I0220 14:46:37.066095 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"]
Feb 20 14:46:37.066803 master-0 kubenswrapper[4172]: I0220 14:46:37.066829 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm"]
Feb 20 14:46:37.067265 master-0 kubenswrapper[4172]: I0220 14:46:37.066876 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww"]
Feb 20 14:46:37.067265 master-0 kubenswrapper[4172]: I0220 14:46:37.066999 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.068687 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069067 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj"]
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069157 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069242 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069273 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069161 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069463 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069531 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069469 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069619 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069701 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069714 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069795 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.069910 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.070127 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.070153 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.070270 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt"]
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.070337 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.070892 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 20 14:46:37.071099 master-0 kubenswrapper[4172]: I0220 14:46:37.071059 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.074475 master-0 kubenswrapper[4172]: I0220 14:46:37.071246 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 20 14:46:37.074475 master-0 kubenswrapper[4172]: I0220 14:46:37.071286 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 20 14:46:37.074475 master-0 kubenswrapper[4172]: I0220 14:46:37.071322 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 20 14:46:37.074475 master-0 kubenswrapper[4172]: I0220 14:46:37.071400 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 20 14:46:37.074475 master-0 kubenswrapper[4172]: I0220 14:46:37.071488 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 20 14:46:37.074475 master-0 kubenswrapper[4172]: I0220 14:46:37.071538 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 20 14:46:37.074475 master-0 kubenswrapper[4172]: I0220 14:46:37.072031 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 20 14:46:37.089563 master-0 kubenswrapper[4172]: I0220 14:46:37.088538 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 20 14:46:37.089563 master-0 kubenswrapper[4172]: I0220 14:46:37.088968 4172 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 20 14:46:37.089563 master-0 kubenswrapper[4172]: I0220 14:46:37.089411 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 20 14:46:37.090082 master-0 kubenswrapper[4172]: I0220 14:46:37.089708 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 20 14:46:37.092665 master-0 kubenswrapper[4172]: I0220 14:46:37.091455 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-97m7r"]
Feb 20 14:46:37.097184 master-0 kubenswrapper[4172]: I0220 14:46:37.095591 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 20 14:46:37.097287 master-0 kubenswrapper[4172]: I0220 14:46:37.097241 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-gkrph"]
Feb 20 14:46:37.098345 master-0 kubenswrapper[4172]: I0220 14:46:37.098298 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq"]
Feb 20 14:46:37.098574 master-0 kubenswrapper[4172]: I0220 14:46:37.098543 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 20 14:46:37.099205 master-0 kubenswrapper[4172]: I0220 14:46:37.099163 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"]
Feb 20 14:46:37.100230 master-0 kubenswrapper[4172]: I0220 14:46:37.100175 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"]
Feb 20 14:46:37.101211 master-0 kubenswrapper[4172]: I0220 14:46:37.101101 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"]
Feb 20 14:46:37.102123 master-0 kubenswrapper[4172]: I0220 14:46:37.102072 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s"]
Feb 20 14:46:37.103280 master-0 kubenswrapper[4172]: I0220 14:46:37.102943 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr"]
Feb 20 14:46:37.104600 master-0 kubenswrapper[4172]: I0220 14:46:37.104265 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6"]
Feb 20 14:46:37.105486 master-0 kubenswrapper[4172]: I0220 14:46:37.105434 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"]
Feb 20 14:46:37.106554 master-0 kubenswrapper[4172]: I0220 14:46:37.106507 4172 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-cgp8r"]
Feb 20 14:46:37.107009 master-0 kubenswrapper[4172]: I0220 14:46:37.106911 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:37.108500 master-0 kubenswrapper[4172]: I0220 14:46:37.108433 4172 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 20 14:46:37.108675 master-0 kubenswrapper[4172]: I0220 14:46:37.108637 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"]
Feb 20 14:46:37.110008 master-0 kubenswrapper[4172]: I0220 14:46:37.109975 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24"]
Feb 20 14:46:37.111143 master-0 kubenswrapper[4172]: I0220 14:46:37.111108 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"]
Feb 20 14:46:37.112111 master-0 kubenswrapper[4172]: I0220 14:46:37.112060 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"]
Feb 20 14:46:37.128402 master-0 kubenswrapper[4172]: I0220 14:46:37.128358 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/87cf4690-1ec1-44fc-94bd-730d9f2e6762-iptables-alerter-script\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:37.128546 master-0 kubenswrapper[4172]: I0220 14:46:37.128421 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e9807a-859c-44c1-8511-0066b0f59ff8-config\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 14:46:37.128546 master-0 kubenswrapper[4172]: I0220 14:46:37.128450 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj4dx\" (UniqueName: \"kubernetes.io/projected/c81ad608-a8ad-4289-a8d2-d48acb9b540c-kube-api-access-wj4dx\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 14:46:37.128546 master-0 kubenswrapper[4172]: I0220 14:46:37.128475 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fttgr\" (UniqueName: \"kubernetes.io/projected/419f28a9-8fd7-4b59-9554-4d884a1208b5-kube-api-access-fttgr\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:46:37.128546 master-0 kubenswrapper[4172]: I0220 14:46:37.128500 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:37.128546 master-0 kubenswrapper[4172]: I0220 14:46:37.128521 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-config\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:37.128546 master-0 kubenswrapper[4172]: I0220 14:46:37.128542 4172 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128565 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5p2\" (UniqueName: \"kubernetes.io/projected/d28490b0-96ca-4fe0-8fae-e6f8390f933b-kube-api-access-qm5p2\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128588 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c81ad608-a8ad-4289-a8d2-d48acb9b540c-serving-cert\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128607 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128628 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128650 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svhtr\" (UniqueName: \"kubernetes.io/projected/45d7ef0c-272b-4d1e-965f-484975d5d25c-kube-api-access-svhtr\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128671 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128694 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db9dc349-5216-43ff-8c17-3a9384a010ea-config\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128718 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43e9807a-859c-44c1-8511-0066b0f59ff8-kube-api-access\") pod 
\"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128747 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9c94\" (UniqueName: \"kubernetes.io/projected/87cf4690-1ec1-44fc-94bd-730d9f2e6762-kube-api-access-r9c94\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128772 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c31b8a7-edcb-403d-9122-7eb740f7d659-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128795 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n85mh\" (UniqueName: \"kubernetes.io/projected/900e244c-67aa-402f-b5f0-d37c5c1cedf7-kube-api-access-n85mh\") pod \"csi-snapshot-controller-operator-6fb4df594f-p29qr\" (UID: \"900e244c-67aa-402f-b5f0-d37c5c1cedf7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128868 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb46b\" (UniqueName: \"kubernetes.io/projected/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-kube-api-access-mb46b\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128906 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 14:46:37.128978 master-0 kubenswrapper[4172]: I0220 14:46:37.128967 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.128989 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.129014 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c31b8a7-edcb-403d-9122-7eb740f7d659-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 14:46:37.129703 master-0 
kubenswrapper[4172]: I0220 14:46:37.129036 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jn8g\" (UniqueName: \"kubernetes.io/projected/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-kube-api-access-4jn8g\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.129057 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-client\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.129079 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.129099 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/419f28a9-8fd7-4b59-9554-4d884a1208b5-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.129119 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-config\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.129138 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk9xr\" (UniqueName: \"kubernetes.io/projected/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-kube-api-access-jk9xr\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.129157 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.129186 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/989af121-da08-4f40-b08c-dd2aa67bc60c-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.129213 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c31b8a7-edcb-403d-9122-7eb740f7d659-serving-cert\") pod 
\"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.129237 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/989af121-da08-4f40-b08c-dd2aa67bc60c-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.129259 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:46:37.129703 master-0 kubenswrapper[4172]: I0220 14:46:37.129282 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e9807a-859c-44c1-8511-0066b0f59ff8-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.129304 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " 
pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.129347 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnwtd\" (UniqueName: \"kubernetes.io/projected/1fe69517-eec2-4721-933c-fa27cea7ab1f-kube-api-access-rnwtd\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.129386 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.129614 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-serving-cert\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.129688 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45d7ef0c-272b-4d1e-965f-484975d5d25c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 14:46:37.130486 master-0 
kubenswrapper[4172]: I0220 14:46:37.129713 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/989af121-da08-4f40-b08c-dd2aa67bc60c-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.129780 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.129893 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzmqr\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-kube-api-access-pzmqr\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.129983 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk5m4\" (UniqueName: \"kubernetes.io/projected/8157f73d-c757-40c4-80bc-3c9de2f2288a-kube-api-access-bk5m4\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.130026 4172 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.130063 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d5fq\" (UniqueName: \"kubernetes.io/projected/c0a3548f-299c-4234-9bf1-c93efcb9740b-kube-api-access-7d5fq\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.130100 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8157f73d-c757-40c4-80bc-3c9de2f2288a-serving-cert\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.130153 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d7ef0c-272b-4d1e-965f-484975d5d25c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 14:46:37.130486 master-0 kubenswrapper[4172]: I0220 14:46:37.130192 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:46:37.131278 master-0 kubenswrapper[4172]: I0220 14:46:37.130229 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:37.131278 master-0 kubenswrapper[4172]: I0220 14:46:37.130262 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smglm\" (UniqueName: \"kubernetes.io/projected/db9dc349-5216-43ff-8c17-3a9384a010ea-kube-api-access-smglm\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 14:46:37.131278 master-0 kubenswrapper[4172]: I0220 14:46:37.130296 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwb5n\" (UniqueName: \"kubernetes.io/projected/234a44fd-c153-47a6-a11d-7d4b7165c236-kube-api-access-gwb5n\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:37.131278 master-0 kubenswrapper[4172]: I0220 14:46:37.130328 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" 
(UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:37.131278 master-0 kubenswrapper[4172]: I0220 14:46:37.130361 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31d71c90-cab7-4411-9426-0713cb026294-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:37.131278 master-0 kubenswrapper[4172]: I0220 14:46:37.130399 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c81ad608-a8ad-4289-a8d2-d48acb9b540c-config\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 14:46:37.131278 master-0 kubenswrapper[4172]: I0220 14:46:37.130507 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db9dc349-5216-43ff-8c17-3a9384a010ea-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 14:46:37.131278 master-0 kubenswrapper[4172]: I0220 14:46:37.130553 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57cks\" (UniqueName: \"kubernetes.io/projected/31d71c90-cab7-4411-9426-0713cb026294-kube-api-access-57cks\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 
14:46:37.131278 master-0 kubenswrapper[4172]: I0220 14:46:37.130579 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"
Feb 20 14:46:37.131278 master-0 kubenswrapper[4172]: I0220 14:46:37.130601 4172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cf4690-1ec1-44fc-94bd-730d9f2e6762-host-slash\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:37.131278 master-0 kubenswrapper[4172]: I0220 14:46:37.130735 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d7ef0c-272b-4d1e-965f-484975d5d25c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm"
Feb 20 14:46:37.131867 master-0 kubenswrapper[4172]: I0220 14:46:37.131502 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e9807a-859c-44c1-8511-0066b0f59ff8-config\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6"
Feb 20 14:46:37.131867 master-0 kubenswrapper[4172]: I0220 14:46:37.131519 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c81ad608-a8ad-4289-a8d2-d48acb9b540c-config\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj"
Feb 20 14:46:37.131867 master-0 kubenswrapper[4172]: I0220 14:46:37.131681 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c31b8a7-edcb-403d-9122-7eb740f7d659-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww"
Feb 20 14:46:37.149495 master-0 kubenswrapper[4172]: I0220 14:46:37.144645 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e9807a-859c-44c1-8511-0066b0f59ff8-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6"
Feb 20 14:46:37.149495 master-0 kubenswrapper[4172]: I0220 14:46:37.144849 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c81ad608-a8ad-4289-a8d2-d48acb9b540c-serving-cert\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj"
Feb 20 14:46:37.149495 master-0 kubenswrapper[4172]: I0220 14:46:37.144860 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c31b8a7-edcb-403d-9122-7eb740f7d659-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww"
Feb 20 14:46:37.149495 master-0 kubenswrapper[4172]: I0220 14:46:37.145164 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45d7ef0c-272b-4d1e-965f-484975d5d25c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm"
Feb 20 14:46:37.149965 master-0 kubenswrapper[4172]: I0220 14:46:37.149885 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svhtr\" (UniqueName: \"kubernetes.io/projected/45d7ef0c-272b-4d1e-965f-484975d5d25c-kube-api-access-svhtr\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm"
Feb 20 14:46:37.152995 master-0 kubenswrapper[4172]: I0220 14:46:37.152822 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj4dx\" (UniqueName: \"kubernetes.io/projected/c81ad608-a8ad-4289-a8d2-d48acb9b540c-kube-api-access-wj4dx\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj"
Feb 20 14:46:37.158882 master-0 kubenswrapper[4172]: I0220 14:46:37.157820 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43e9807a-859c-44c1-8511-0066b0f59ff8-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6"
Feb 20 14:46:37.158882 master-0 kubenswrapper[4172]: I0220 14:46:37.157878 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c31b8a7-edcb-403d-9122-7eb740f7d659-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww"
Feb 20 14:46:37.231193 master-0 kubenswrapper[4172]: I0220 14:46:37.231127 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/87cf4690-1ec1-44fc-94bd-730d9f2e6762-iptables-alerter-script\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:37.231499 master-0 kubenswrapper[4172]: I0220 14:46:37.231217 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fttgr\" (UniqueName: \"kubernetes.io/projected/419f28a9-8fd7-4b59-9554-4d884a1208b5-kube-api-access-fttgr\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"
Feb 20 14:46:37.231499 master-0 kubenswrapper[4172]: I0220 14:46:37.231439 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"
Feb 20 14:46:37.231499 master-0 kubenswrapper[4172]: I0220 14:46:37.231474 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-config\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 14:46:37.231591 master-0 kubenswrapper[4172]: I0220 14:46:37.231509 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"
Feb 20 14:46:37.231591 master-0 kubenswrapper[4172]: I0220 14:46:37.231544 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5p2\" (UniqueName: \"kubernetes.io/projected/d28490b0-96ca-4fe0-8fae-e6f8390f933b-kube-api-access-qm5p2\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph"
Feb 20 14:46:37.231591 master-0 kubenswrapper[4172]: I0220 14:46:37.231573 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 14:46:37.231756 master-0 kubenswrapper[4172]: I0220 14:46:37.231608 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq"
Feb 20 14:46:37.231756 master-0 kubenswrapper[4172]: I0220 14:46:37.231650 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:37.231756 master-0 kubenswrapper[4172]: I0220 14:46:37.231695 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db9dc349-5216-43ff-8c17-3a9384a010ea-config\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24"
Feb 20 14:46:37.231838 master-0 kubenswrapper[4172]: I0220 14:46:37.231751 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9c94\" (UniqueName: \"kubernetes.io/projected/87cf4690-1ec1-44fc-94bd-730d9f2e6762-kube-api-access-r9c94\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:37.231838 master-0 kubenswrapper[4172]: I0220 14:46:37.231802 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n85mh\" (UniqueName: \"kubernetes.io/projected/900e244c-67aa-402f-b5f0-d37c5c1cedf7-kube-api-access-n85mh\") pod \"csi-snapshot-controller-operator-6fb4df594f-p29qr\" (UID: \"900e244c-67aa-402f-b5f0-d37c5c1cedf7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr"
Feb 20 14:46:37.231895 master-0 kubenswrapper[4172]: I0220 14:46:37.231845 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb46b\" (UniqueName: \"kubernetes.io/projected/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-kube-api-access-mb46b\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq"
Feb 20 14:46:37.231938 master-0 kubenswrapper[4172]: I0220 14:46:37.231888 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq"
Feb 20 14:46:37.231989 master-0 kubenswrapper[4172]: I0220 14:46:37.231961 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s"
Feb 20 14:46:37.232052 master-0 kubenswrapper[4172]: I0220 14:46:37.232012 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph"
Feb 20 14:46:37.232094 master-0 kubenswrapper[4172]: I0220 14:46:37.232064 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jn8g\" (UniqueName: \"kubernetes.io/projected/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-kube-api-access-4jn8g\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s"
Feb 20 14:46:37.232137 master-0 kubenswrapper[4172]: I0220 14:46:37.232110 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-client\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 14:46:37.232181 master-0 kubenswrapper[4172]: I0220 14:46:37.232159 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"
Feb 20 14:46:37.232235 master-0 kubenswrapper[4172]: I0220 14:46:37.232197 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-config\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"
Feb 20 14:46:37.232284 master-0 kubenswrapper[4172]: I0220 14:46:37.232252 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/87cf4690-1ec1-44fc-94bd-730d9f2e6762-iptables-alerter-script\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:37.232313 master-0 kubenswrapper[4172]: I0220 14:46:37.232256 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk9xr\" (UniqueName: \"kubernetes.io/projected/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-kube-api-access-jk9xr\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"
Feb 20 14:46:37.232343 master-0 kubenswrapper[4172]: I0220 14:46:37.232329 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"
Feb 20 14:46:37.232399 master-0 kubenswrapper[4172]: I0220 14:46:37.232380 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/419f28a9-8fd7-4b59-9554-4d884a1208b5-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"
Feb 20 14:46:37.232430 master-0 kubenswrapper[4172]: I0220 14:46:37.232414 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/989af121-da08-4f40-b08c-dd2aa67bc60c-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt"
Feb 20 14:46:37.232460 master-0 kubenswrapper[4172]: I0220 14:46:37.232441 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/989af121-da08-4f40-b08c-dd2aa67bc60c-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt"
Feb 20 14:46:37.232490 master-0 kubenswrapper[4172]: I0220 14:46:37.232465 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:46:37.232517 master-0 kubenswrapper[4172]: I0220 14:46:37.232491 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"
Feb 20 14:46:37.232565 master-0 kubenswrapper[4172]: I0220 14:46:37.232516 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnwtd\" (UniqueName: \"kubernetes.io/projected/1fe69517-eec2-4721-933c-fa27cea7ab1f-kube-api-access-rnwtd\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:46:37.232565 master-0 kubenswrapper[4172]: I0220 14:46:37.232546 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s"
Feb 20 14:46:37.233042 master-0 kubenswrapper[4172]: I0220 14:46:37.233002 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"
Feb 20 14:46:37.233632 master-0 kubenswrapper[4172]: I0220 14:46:37.233587 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 14:46:37.233712 master-0 kubenswrapper[4172]: I0220 14:46:37.233679 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-serving-cert\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 14:46:37.233747 master-0 kubenswrapper[4172]: I0220 14:46:37.233732 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:46:37.233795 master-0 kubenswrapper[4172]: I0220 14:46:37.233770 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/989af121-da08-4f40-b08c-dd2aa67bc60c-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt"
Feb 20 14:46:37.233829 master-0 kubenswrapper[4172]: I0220 14:46:37.233805 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzmqr\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-kube-api-access-pzmqr\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"
Feb 20 14:46:37.233877 master-0 kubenswrapper[4172]: I0220 14:46:37.233842 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d5fq\" (UniqueName: \"kubernetes.io/projected/c0a3548f-299c-4234-9bf1-c93efcb9740b-kube-api-access-7d5fq\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:46:37.233906 master-0 kubenswrapper[4172]: I0220 14:46:37.233881 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8157f73d-c757-40c4-80bc-3c9de2f2288a-serving-cert\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"
Feb 20 14:46:37.234025 master-0 kubenswrapper[4172]: I0220 14:46:37.233971 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-config\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 14:46:37.234152 master-0 kubenswrapper[4172]: I0220 14:46:37.234106 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq"
Feb 20 14:46:37.234639 master-0 kubenswrapper[4172]: I0220 14:46:37.234600 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s"
Feb 20 14:46:37.234758 master-0 kubenswrapper[4172]: E0220 14:46:37.234740 4172 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 14:46:37.234819 master-0 kubenswrapper[4172]: E0220 14:46:37.234806 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls podName:d28490b0-96ca-4fe0-8fae-e6f8390f933b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:37.734784153 +0000 UTC m=+138.290009793 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls") pod "dns-operator-8c7d49845-gkrph" (UID: "d28490b0-96ca-4fe0-8fae-e6f8390f933b") : secret "metrics-tls" not found
Feb 20 14:46:37.236483 master-0 kubenswrapper[4172]: E0220 14:46:37.236249 4172 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 20 14:46:37.236483 master-0 kubenswrapper[4172]: E0220 14:46:37.236325 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics podName:c0a3548f-299c-4234-9bf1-c93efcb9740b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:37.73630356 +0000 UTC m=+138.291529190 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-97m7r" (UID: "c0a3548f-299c-4234-9bf1-c93efcb9740b") : secret "marketplace-operator-metrics" not found
Feb 20 14:46:37.237065 master-0 kubenswrapper[4172]: E0220 14:46:37.237037 4172 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 20 14:46:37.237120 master-0 kubenswrapper[4172]: E0220 14:46:37.237111 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:37.737091089 +0000 UTC m=+138.292316829 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "node-tuning-operator-tls" not found
Feb 20 14:46:37.237609 master-0 kubenswrapper[4172]: I0220 14:46:37.237577 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/419f28a9-8fd7-4b59-9554-4d884a1208b5-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"
Feb 20 14:46:37.237766 master-0 kubenswrapper[4172]: E0220 14:46:37.237735 4172 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 20 14:46:37.237810 master-0 kubenswrapper[4172]: I0220 14:46:37.237725 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/989af121-da08-4f40-b08c-dd2aa67bc60c-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt"
Feb 20 14:46:37.237810 master-0 kubenswrapper[4172]: E0220 14:46:37.237798 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls podName:419f28a9-8fd7-4b59-9554-4d884a1208b5 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:37.737781886 +0000 UTC m=+138.293007596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-p7mjp" (UID: "419f28a9-8fd7-4b59-9554-4d884a1208b5") : secret "cluster-monitoring-operator-tls" not found
Feb 20 14:46:37.237879 master-0 kubenswrapper[4172]: I0220 14:46:37.237737 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db9dc349-5216-43ff-8c17-3a9384a010ea-config\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24"
Feb 20 14:46:37.238135 master-0 kubenswrapper[4172]: I0220 14:46:37.238106 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq"
Feb 20 14:46:37.238246 master-0 kubenswrapper[4172]: I0220 14:46:37.238225 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s"
Feb 20 14:46:37.238603 master-0 kubenswrapper[4172]: I0220 14:46:37.238551 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk5m4\" (UniqueName: \"kubernetes.io/projected/8157f73d-c757-40c4-80bc-3c9de2f2288a-kube-api-access-bk5m4\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"
Feb 20 14:46:37.238707 master-0 kubenswrapper[4172]: I0220 14:46:37.238631 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"
Feb 20 14:46:37.238707 master-0 kubenswrapper[4172]: I0220 14:46:37.238678 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:46:37.238798 master-0 kubenswrapper[4172]: I0220 14:46:37.238722 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 14:46:37.238798 master-0 kubenswrapper[4172]: I0220 14:46:37.238763 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:37.238885 master-0 kubenswrapper[4172]: E0220 14:46:37.238872 4172 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 20 14:46:37.238993 master-0 kubenswrapper[4172]: E0220 14:46:37.238955 4172 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 20 14:46:37.238993 master-0 kubenswrapper[4172]: I0220 14:46:37.238968 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"
Feb 20 14:46:37.238993 master-0 kubenswrapper[4172]: E0220 14:46:37.238961 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:37.738913434 +0000 UTC m=+138.294139064 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "performance-addon-operator-webhook-cert" not found
Feb 20 14:46:37.239114 master-0 kubenswrapper[4172]: E0220 14:46:37.239013 4172 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 20 14:46:37.239114 master-0 kubenswrapper[4172]: E0220 14:46:37.239019 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls podName:b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:37.739005666 +0000 UTC m=+138.294231376 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-g7glt" (UID: "b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1") : secret "image-registry-operator-tls" not found
Feb 20 14:46:37.239114 master-0 kubenswrapper[4172]: E0220 14:46:37.239055 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert podName:1fe69517-eec2-4721-933c-fa27cea7ab1f nodeName:}" failed. No retries permitted until 2026-02-20 14:46:37.739043047 +0000 UTC m=+138.294268677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2sw9z" (UID: "1fe69517-eec2-4721-933c-fa27cea7ab1f") : secret "package-server-manager-serving-cert" not found
Feb 20 14:46:37.239114 master-0 kubenswrapper[4172]: I0220 14:46:37.239088 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31d71c90-cab7-4411-9426-0713cb026294-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:37.239363 master-0 kubenswrapper[4172]: I0220 14:46:37.239127 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smglm\" (UniqueName: \"kubernetes.io/projected/db9dc349-5216-43ff-8c17-3a9384a010ea-kube-api-access-smglm\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24"
Feb 20 14:46:37.239363 master-0 kubenswrapper[4172]: I0220 14:46:37.239135 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-config\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"
Feb 20 14:46:37.239363 master-0 kubenswrapper[4172]: I0220 14:46:37.239162 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwb5n\" (UniqueName: \"kubernetes.io/projected/234a44fd-c153-47a6-a11d-7d4b7165c236-kube-api-access-gwb5n\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 14:46:37.239363 master-0 kubenswrapper[4172]: I0220 14:46:37.239203 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db9dc349-5216-43ff-8c17-3a9384a010ea-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24"
Feb 20 14:46:37.239363 master-0 kubenswrapper[4172]: I0220 14:46:37.239236 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"
Feb 20 14:46:37.239363 master-0 kubenswrapper[4172]: I0220 14:46:37.239266 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cf4690-1ec1-44fc-94bd-730d9f2e6762-host-slash\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:37.239363 master-0 kubenswrapper[4172]: I0220 14:46:37.239302 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57cks\" (UniqueName: \"kubernetes.io/projected/31d71c90-cab7-4411-9426-0713cb026294-kube-api-access-57cks\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:37.239688 master-0 kubenswrapper[4172]: I0220 14:46:37.239127 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:46:37.239688 master-0 kubenswrapper[4172]: I0220 14:46:37.239496 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 14:46:37.239688 master-0 kubenswrapper[4172]: I0220 14:46:37.239505 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cf4690-1ec1-44fc-94bd-730d9f2e6762-host-slash\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:37.239688 master-0 kubenswrapper[4172]: E0220 14:46:37.239523 4172 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 20 14:46:37.239688 master-0 kubenswrapper[4172]: E0220 14:46:37.239566 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs podName:a1fb2774-6dd7-4429-9df3-4ddfcdaac939 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:37.739551709 +0000 UTC m=+138.294777349 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-wl49x" (UID: "a1fb2774-6dd7-4429-9df3-4ddfcdaac939") : secret "multus-admission-controller-secret" not found Feb 20 14:46:37.240342 master-0 kubenswrapper[4172]: I0220 14:46:37.240314 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31d71c90-cab7-4411-9426-0713cb026294-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:37.240527 master-0 kubenswrapper[4172]: I0220 14:46:37.240494 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:37.240775 master-0 kubenswrapper[4172]: I0220 14:46:37.240733 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-serving-cert\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:37.241080 master-0 kubenswrapper[4172]: I0220 14:46:37.241052 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-client\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" 
Feb 20 14:46:37.241459 master-0 kubenswrapper[4172]: I0220 14:46:37.241431 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/989af121-da08-4f40-b08c-dd2aa67bc60c-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 14:46:37.241660 master-0 kubenswrapper[4172]: I0220 14:46:37.241613 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8157f73d-c757-40c4-80bc-3c9de2f2288a-serving-cert\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:37.241837 master-0 kubenswrapper[4172]: I0220 14:46:37.241815 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db9dc349-5216-43ff-8c17-3a9384a010ea-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 14:46:37.265147 master-0 kubenswrapper[4172]: I0220 14:46:37.263109 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnwtd\" (UniqueName: \"kubernetes.io/projected/1fe69517-eec2-4721-933c-fa27cea7ab1f-kube-api-access-rnwtd\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:46:37.265147 master-0 kubenswrapper[4172]: I0220 14:46:37.265065 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9c94\" (UniqueName: 
\"kubernetes.io/projected/87cf4690-1ec1-44fc-94bd-730d9f2e6762-kube-api-access-r9c94\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r" Feb 20 14:46:37.274667 master-0 kubenswrapper[4172]: I0220 14:46:37.268776 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzmqr\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-kube-api-access-pzmqr\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:37.274667 master-0 kubenswrapper[4172]: I0220 14:46:37.272654 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb46b\" (UniqueName: \"kubernetes.io/projected/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-kube-api-access-mb46b\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 14:46:37.277940 master-0 kubenswrapper[4172]: I0220 14:46:37.276190 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n85mh\" (UniqueName: \"kubernetes.io/projected/900e244c-67aa-402f-b5f0-d37c5c1cedf7-kube-api-access-n85mh\") pod \"csi-snapshot-controller-operator-6fb4df594f-p29qr\" (UID: \"900e244c-67aa-402f-b5f0-d37c5c1cedf7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr" Feb 20 14:46:37.277940 master-0 kubenswrapper[4172]: I0220 14:46:37.276939 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jn8g\" (UniqueName: \"kubernetes.io/projected/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-kube-api-access-4jn8g\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: 
\"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 14:46:37.286778 master-0 kubenswrapper[4172]: I0220 14:46:37.278275 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk5m4\" (UniqueName: \"kubernetes.io/projected/8157f73d-c757-40c4-80bc-3c9de2f2288a-kube-api-access-bk5m4\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:37.286778 master-0 kubenswrapper[4172]: I0220 14:46:37.280539 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5p2\" (UniqueName: \"kubernetes.io/projected/d28490b0-96ca-4fe0-8fae-e6f8390f933b-kube-api-access-qm5p2\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:46:37.286778 master-0 kubenswrapper[4172]: I0220 14:46:37.280943 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/989af121-da08-4f40-b08c-dd2aa67bc60c-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 14:46:37.287659 master-0 kubenswrapper[4172]: I0220 14:46:37.287608 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk9xr\" (UniqueName: \"kubernetes.io/projected/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-kube-api-access-jk9xr\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" Feb 20 14:46:37.288240 master-0 kubenswrapper[4172]: I0220 14:46:37.288210 4172 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:37.288318 master-0 kubenswrapper[4172]: I0220 14:46:37.288249 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d5fq\" (UniqueName: \"kubernetes.io/projected/c0a3548f-299c-4234-9bf1-c93efcb9740b-kube-api-access-7d5fq\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:46:37.289899 master-0 kubenswrapper[4172]: I0220 14:46:37.289855 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fttgr\" (UniqueName: \"kubernetes.io/projected/419f28a9-8fd7-4b59-9554-4d884a1208b5-kube-api-access-fttgr\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:46:37.297431 master-0 kubenswrapper[4172]: I0220 14:46:37.297393 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-cgp8r" Feb 20 14:46:37.307297 master-0 kubenswrapper[4172]: I0220 14:46:37.307257 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57cks\" (UniqueName: \"kubernetes.io/projected/31d71c90-cab7-4411-9426-0713cb026294-kube-api-access-57cks\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:37.308586 master-0 kubenswrapper[4172]: I0220 14:46:37.308557 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smglm\" (UniqueName: \"kubernetes.io/projected/db9dc349-5216-43ff-8c17-3a9384a010ea-kube-api-access-smglm\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 14:46:37.312630 master-0 kubenswrapper[4172]: W0220 14:46:37.312453 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87cf4690_1ec1_44fc_94bd_730d9f2e6762.slice/crio-0ea53368ce61e6c8836a7d0c6d716b7e2c7e18ee974ab80f253b08e24d34227b WatchSource:0}: Error finding container 0ea53368ce61e6c8836a7d0c6d716b7e2c7e18ee974ab80f253b08e24d34227b: Status 404 returned error can't find the container with id 0ea53368ce61e6c8836a7d0c6d716b7e2c7e18ee974ab80f253b08e24d34227b Feb 20 14:46:37.327355 master-0 kubenswrapper[4172]: I0220 14:46:37.327327 4172 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwb5n\" (UniqueName: \"kubernetes.io/projected/234a44fd-c153-47a6-a11d-7d4b7165c236-kube-api-access-gwb5n\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 
14:46:37.341813 master-0 kubenswrapper[4172]: I0220 14:46:37.341765 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 14:46:37.373736 master-0 kubenswrapper[4172]: I0220 14:46:37.373682 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 14:46:37.398488 master-0 kubenswrapper[4172]: I0220 14:46:37.398439 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 14:46:37.404147 master-0 kubenswrapper[4172]: I0220 14:46:37.404107 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 14:46:37.413091 master-0 kubenswrapper[4172]: I0220 14:46:37.413033 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 14:46:37.447970 master-0 kubenswrapper[4172]: I0220 14:46:37.447807 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:37.479552 master-0 kubenswrapper[4172]: I0220 14:46:37.479008 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr" Feb 20 14:46:37.487947 master-0 kubenswrapper[4172]: I0220 14:46:37.485951 4172 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 14:46:37.494581 master-0 kubenswrapper[4172]: I0220 14:46:37.494101 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 14:46:37.501286 master-0 kubenswrapper[4172]: I0220 14:46:37.501233 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 14:46:37.532444 master-0 kubenswrapper[4172]: I0220 14:46:37.532408 4172 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:37.581419 master-0 kubenswrapper[4172]: I0220 14:46:37.574645 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm"] Feb 20 14:46:37.588711 master-0 kubenswrapper[4172]: W0220 14:46:37.587730 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45d7ef0c_272b_4d1e_965f_484975d5d25c.slice/crio-9df920ca539f41ddc66a331c27bc3a12a40dbc8ec795ca71f8a746f6b5203647 WatchSource:0}: Error finding container 9df920ca539f41ddc66a331c27bc3a12a40dbc8ec795ca71f8a746f6b5203647: Status 404 returned error can't find the container with id 9df920ca539f41ddc66a331c27bc3a12a40dbc8ec795ca71f8a746f6b5203647 Feb 20 14:46:37.640794 master-0 kubenswrapper[4172]: I0220 14:46:37.640756 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt"] Feb 20 14:46:37.745380 master-0 kubenswrapper[4172]: I0220 14:46:37.745333 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" Feb 20 14:46:37.745380 master-0 kubenswrapper[4172]: I0220 14:46:37.745381 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:46:37.745587 master-0 kubenswrapper[4172]: I0220 14:46:37.745404 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:37.745587 master-0 kubenswrapper[4172]: I0220 14:46:37.745432 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:46:37.745587 master-0 kubenswrapper[4172]: I0220 14:46:37.745465 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " 
pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:46:37.745587 master-0 kubenswrapper[4172]: I0220 14:46:37.745484 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:37.745587 master-0 kubenswrapper[4172]: I0220 14:46:37.745501 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:46:37.745587 master-0 kubenswrapper[4172]: I0220 14:46:37.745517 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:37.745801 master-0 kubenswrapper[4172]: E0220 14:46:37.745615 4172 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 20 14:46:37.745801 master-0 kubenswrapper[4172]: E0220 14:46:37.745649 4172 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 20 14:46:37.745801 master-0 kubenswrapper[4172]: E0220 
14:46:37.745661 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:38.745647018 +0000 UTC m=+139.300872618 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "performance-addon-operator-webhook-cert" not found Feb 20 14:46:37.745801 master-0 kubenswrapper[4172]: E0220 14:46:37.745691 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls podName:d28490b0-96ca-4fe0-8fae-e6f8390f933b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:38.745676809 +0000 UTC m=+139.300902409 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls") pod "dns-operator-8c7d49845-gkrph" (UID: "d28490b0-96ca-4fe0-8fae-e6f8390f933b") : secret "metrics-tls" not found Feb 20 14:46:37.745801 master-0 kubenswrapper[4172]: E0220 14:46:37.745734 4172 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 20 14:46:37.745801 master-0 kubenswrapper[4172]: E0220 14:46:37.745757 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs podName:a1fb2774-6dd7-4429-9df3-4ddfcdaac939 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:38.745750191 +0000 UTC m=+139.300975791 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-wl49x" (UID: "a1fb2774-6dd7-4429-9df3-4ddfcdaac939") : secret "multus-admission-controller-secret" not found Feb 20 14:46:37.745801 master-0 kubenswrapper[4172]: E0220 14:46:37.745795 4172 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 20 14:46:37.746102 master-0 kubenswrapper[4172]: E0220 14:46:37.745816 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls podName:419f28a9-8fd7-4b59-9554-4d884a1208b5 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:38.745809482 +0000 UTC m=+139.301035082 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-p7mjp" (UID: "419f28a9-8fd7-4b59-9554-4d884a1208b5") : secret "cluster-monitoring-operator-tls" not found Feb 20 14:46:37.746102 master-0 kubenswrapper[4172]: E0220 14:46:37.745986 4172 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 20 14:46:37.746102 master-0 kubenswrapper[4172]: E0220 14:46:37.746009 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics podName:c0a3548f-299c-4234-9bf1-c93efcb9740b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:38.746002147 +0000 UTC m=+139.301227737 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-97m7r" (UID: "c0a3548f-299c-4234-9bf1-c93efcb9740b") : secret "marketplace-operator-metrics" not found Feb 20 14:46:37.746102 master-0 kubenswrapper[4172]: E0220 14:46:37.746039 4172 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 20 14:46:37.746258 master-0 kubenswrapper[4172]: E0220 14:46:37.746129 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls podName:b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:38.746050338 +0000 UTC m=+139.301275938 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-g7glt" (UID: "b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1") : secret "image-registry-operator-tls" not found Feb 20 14:46:37.746258 master-0 kubenswrapper[4172]: E0220 14:46:37.746165 4172 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 20 14:46:37.746258 master-0 kubenswrapper[4172]: E0220 14:46:37.746183 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert podName:1fe69517-eec2-4721-933c-fa27cea7ab1f nodeName:}" failed. No retries permitted until 2026-02-20 14:46:38.746177461 +0000 UTC m=+139.301403061 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2sw9z" (UID: "1fe69517-eec2-4721-933c-fa27cea7ab1f") : secret "package-server-manager-serving-cert" not found
Feb 20 14:46:37.746372 master-0 kubenswrapper[4172]: E0220 14:46:37.746264 4172 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 20 14:46:37.746372 master-0 kubenswrapper[4172]: E0220 14:46:37.746302 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:38.746291414 +0000 UTC m=+139.301517014 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "node-tuning-operator-tls" not found
Feb 20 14:46:37.762932 master-0 kubenswrapper[4172]: I0220 14:46:37.762851 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6"]
Feb 20 14:46:37.770504 master-0 kubenswrapper[4172]: W0220 14:46:37.770459 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43e9807a_859c_44c1_8511_0066b0f59ff8.slice/crio-e94527abc555de66f60f9e134865dfe60d787ebd1878546078cb9b2523c30cab WatchSource:0}: Error finding container e94527abc555de66f60f9e134865dfe60d787ebd1878546078cb9b2523c30cab: Status 404 returned error can't find the container with id e94527abc555de66f60f9e134865dfe60d787ebd1878546078cb9b2523c30cab
Feb 20 14:46:37.778405 master-0 kubenswrapper[4172]: I0220 14:46:37.778362 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" event={"ID":"45d7ef0c-272b-4d1e-965f-484975d5d25c","Type":"ContainerStarted","Data":"9df920ca539f41ddc66a331c27bc3a12a40dbc8ec795ca71f8a746f6b5203647"}
Feb 20 14:46:37.779733 master-0 kubenswrapper[4172]: I0220 14:46:37.779684 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" event={"ID":"989af121-da08-4f40-b08c-dd2aa67bc60c","Type":"ContainerStarted","Data":"b4cf8dbc3fd31a273c2cbd586eecdb2a0961392b7bd552bb39381cfb88539e45"}
Feb 20 14:46:37.780567 master-0 kubenswrapper[4172]: I0220 14:46:37.780532 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" event={"ID":"43e9807a-859c-44c1-8511-0066b0f59ff8","Type":"ContainerStarted","Data":"e94527abc555de66f60f9e134865dfe60d787ebd1878546078cb9b2523c30cab"}
Feb 20 14:46:37.782867 master-0 kubenswrapper[4172]: I0220 14:46:37.782832 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-cgp8r" event={"ID":"87cf4690-1ec1-44fc-94bd-730d9f2e6762","Type":"ContainerStarted","Data":"0ea53368ce61e6c8836a7d0c6d716b7e2c7e18ee974ab80f253b08e24d34227b"}
Feb 20 14:46:37.784568 master-0 kubenswrapper[4172]: I0220 14:46:37.784478 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s"]
Feb 20 14:46:37.790615 master-0 kubenswrapper[4172]: W0220 14:46:37.790563 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3ca2d2f_9f31_4524_a28f_cf16b02dd711.slice/crio-6315ef904771a7f7ee8f8fb64b568088a83f03dc9235439160e67d9df1c9a04f WatchSource:0}: Error finding container 6315ef904771a7f7ee8f8fb64b568088a83f03dc9235439160e67d9df1c9a04f: Status 404 returned error can't find the container with id 6315ef904771a7f7ee8f8fb64b568088a83f03dc9235439160e67d9df1c9a04f
Feb 20 14:46:37.840297 master-0 kubenswrapper[4172]: I0220 14:46:37.840244 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww"]
Feb 20 14:46:37.848293 master-0 kubenswrapper[4172]: W0220 14:46:37.848244 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c31b8a7_edcb_403d_9122_7eb740f7d659.slice/crio-1913b004153de96aee747d5e43e4468694e4be30746f1b0a2aa4f60e2176707c WatchSource:0}: Error finding container 1913b004153de96aee747d5e43e4468694e4be30746f1b0a2aa4f60e2176707c: Status 404 returned error can't find the container with id 1913b004153de96aee747d5e43e4468694e4be30746f1b0a2aa4f60e2176707c
Feb 20 14:46:37.921518 master-0 kubenswrapper[4172]: I0220 14:46:37.921328 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"]
Feb 20 14:46:37.929482 master-0 kubenswrapper[4172]: W0220 14:46:37.929416 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8157f73d_c757_40c4_80bc_3c9de2f2288a.slice/crio-3f68274f91c27d15a060c5bac225b0b94e8aa70b90454461d048fa9e384a03df WatchSource:0}: Error finding container 3f68274f91c27d15a060c5bac225b0b94e8aa70b90454461d048fa9e384a03df: Status 404 returned error can't find the container with id 3f68274f91c27d15a060c5bac225b0b94e8aa70b90454461d048fa9e384a03df
Feb 20 14:46:37.954989 master-0 kubenswrapper[4172]: I0220 14:46:37.954809 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj"]
Feb 20 14:46:37.963357 master-0 kubenswrapper[4172]: W0220 14:46:37.963330 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc81ad608_a8ad_4289_a8d2_d48acb9b540c.slice/crio-92d6a373c92ade68969e49443823f212abf3c0859e9aaf5d10ff5913a474e6f8 WatchSource:0}: Error finding container 92d6a373c92ade68969e49443823f212abf3c0859e9aaf5d10ff5913a474e6f8: Status 404 returned error can't find the container with id 92d6a373c92ade68969e49443823f212abf3c0859e9aaf5d10ff5913a474e6f8
Feb 20 14:46:38.005823 master-0 kubenswrapper[4172]: I0220 14:46:38.005514 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24"]
Feb 20 14:46:38.013580 master-0 kubenswrapper[4172]: I0220 14:46:38.013510 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq"]
Feb 20 14:46:38.016098 master-0 kubenswrapper[4172]: I0220 14:46:38.015044 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr"]
Feb 20 14:46:38.016992 master-0 kubenswrapper[4172]: I0220 14:46:38.016877 4172 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"]
Feb 20 14:46:38.020540 master-0 kubenswrapper[4172]: W0220 14:46:38.020447 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b73ae08_0ad7_4f99_8002_6df0d984cd2c.slice/crio-afc706c41127ee1f98bf413cd8a012a0e0a8f183eef4bf77721d14a272ded89e WatchSource:0}: Error finding container afc706c41127ee1f98bf413cd8a012a0e0a8f183eef4bf77721d14a272ded89e: Status 404 returned error can't find the container with id afc706c41127ee1f98bf413cd8a012a0e0a8f183eef4bf77721d14a272ded89e
Feb 20 14:46:38.020703 master-0 kubenswrapper[4172]: W0220 14:46:38.020674 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb9dc349_5216_43ff_8c17_3a9384a010ea.slice/crio-95650a37daeacacf8e69d045d48ba4a17652648a0c83345072715e4ffcfa2dda WatchSource:0}: Error finding container 95650a37daeacacf8e69d045d48ba4a17652648a0c83345072715e4ffcfa2dda: Status 404 returned error can't find the container with id 95650a37daeacacf8e69d045d48ba4a17652648a0c83345072715e4ffcfa2dda
Feb 20 14:46:38.025738 master-0 kubenswrapper[4172]: W0220 14:46:38.025709 4172 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod234a44fd_c153_47a6_a11d_7d4b7165c236.slice/crio-7190b6f768a0fe97808696f83db6e3236f51dc32c15727d9791bd6e154e97696 WatchSource:0}: Error finding container 7190b6f768a0fe97808696f83db6e3236f51dc32c15727d9791bd6e154e97696: Status 404 returned error can't find the container with id 7190b6f768a0fe97808696f83db6e3236f51dc32c15727d9791bd6e154e97696
Feb 20 14:46:38.754508 master-0 kubenswrapper[4172]: I0220 14:46:38.754194 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:46:38.754508 master-0 kubenswrapper[4172]: I0220 14:46:38.754507 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"
Feb 20 14:46:38.754937 master-0 kubenswrapper[4172]: I0220 14:46:38.754554 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:46:38.754937 master-0 kubenswrapper[4172]: E0220 14:46:38.754641 4172 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 20 14:46:38.754937 master-0 kubenswrapper[4172]: E0220 14:46:38.754651 4172 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 20 14:46:38.754937 master-0 kubenswrapper[4172]: E0220 14:46:38.754706 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert podName:1fe69517-eec2-4721-933c-fa27cea7ab1f nodeName:}" failed. No retries permitted until 2026-02-20 14:46:40.754673169 +0000 UTC m=+141.309898769 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2sw9z" (UID: "1fe69517-eec2-4721-933c-fa27cea7ab1f") : secret "package-server-manager-serving-cert" not found
Feb 20 14:46:38.754937 master-0 kubenswrapper[4172]: E0220 14:46:38.754732 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls podName:b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:40.75471373 +0000 UTC m=+141.309939330 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-g7glt" (UID: "b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1") : secret "image-registry-operator-tls" not found
Feb 20 14:46:38.754937 master-0 kubenswrapper[4172]: I0220 14:46:38.754766 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:38.754937 master-0 kubenswrapper[4172]: I0220 14:46:38.754813 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"
Feb 20 14:46:38.754937 master-0 kubenswrapper[4172]: I0220 14:46:38.754854 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"
Feb 20 14:46:38.755345 master-0 kubenswrapper[4172]: E0220 14:46:38.754964 4172 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 20 14:46:38.755345 master-0 kubenswrapper[4172]: E0220 14:46:38.755005 4172 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 20 14:46:38.755345 master-0 kubenswrapper[4172]: E0220 14:46:38.755040 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls podName:419f28a9-8fd7-4b59-9554-4d884a1208b5 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:40.755030428 +0000 UTC m=+141.310256118 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-p7mjp" (UID: "419f28a9-8fd7-4b59-9554-4d884a1208b5") : secret "cluster-monitoring-operator-tls" not found
Feb 20 14:46:38.755345 master-0 kubenswrapper[4172]: E0220 14:46:38.755039 4172 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 20 14:46:38.755345 master-0 kubenswrapper[4172]: E0220 14:46:38.755127 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs podName:a1fb2774-6dd7-4429-9df3-4ddfcdaac939 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:40.75510368 +0000 UTC m=+141.310329370 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-wl49x" (UID: "a1fb2774-6dd7-4429-9df3-4ddfcdaac939") : secret "multus-admission-controller-secret" not found
Feb 20 14:46:38.755345 master-0 kubenswrapper[4172]: E0220 14:46:38.754991 4172 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 20 14:46:38.755345 master-0 kubenswrapper[4172]: I0220 14:46:38.754964 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:38.755345 master-0 kubenswrapper[4172]: E0220 14:46:38.755044 4172 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 20 14:46:38.755345 master-0 kubenswrapper[4172]: E0220 14:46:38.755151 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:40.755145651 +0000 UTC m=+141.310371251 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "performance-addon-operator-webhook-cert" not found
Feb 20 14:46:38.755345 master-0 kubenswrapper[4172]: I0220 14:46:38.755294 4172 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph"
Feb 20 14:46:38.755345 master-0 kubenswrapper[4172]: E0220 14:46:38.755351 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:40.755333695 +0000 UTC m=+141.310559365 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "node-tuning-operator-tls" not found
Feb 20 14:46:38.755702 master-0 kubenswrapper[4172]: E0220 14:46:38.755396 4172 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 14:46:38.755702 master-0 kubenswrapper[4172]: E0220 14:46:38.755441 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls podName:d28490b0-96ca-4fe0-8fae-e6f8390f933b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:40.755426898 +0000 UTC m=+141.310652498 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls") pod "dns-operator-8c7d49845-gkrph" (UID: "d28490b0-96ca-4fe0-8fae-e6f8390f933b") : secret "metrics-tls" not found
Feb 20 14:46:38.755702 master-0 kubenswrapper[4172]: E0220 14:46:38.755604 4172 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics podName:c0a3548f-299c-4234-9bf1-c93efcb9740b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:40.755587301 +0000 UTC m=+141.310812931 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-97m7r" (UID: "c0a3548f-299c-4234-9bf1-c93efcb9740b") : secret "marketplace-operator-metrics" not found
Feb 20 14:46:38.800375 master-0 kubenswrapper[4172]: I0220 14:46:38.786712 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerStarted","Data":"6315ef904771a7f7ee8f8fb64b568088a83f03dc9235439160e67d9df1c9a04f"}
Feb 20 14:46:38.800375 master-0 kubenswrapper[4172]: I0220 14:46:38.787601 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" event={"ID":"4c31b8a7-edcb-403d-9122-7eb740f7d659","Type":"ContainerStarted","Data":"1913b004153de96aee747d5e43e4468694e4be30746f1b0a2aa4f60e2176707c"}
Feb 20 14:46:38.800375 master-0 kubenswrapper[4172]: I0220 14:46:38.788885 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" event={"ID":"8157f73d-c757-40c4-80bc-3c9de2f2288a","Type":"ContainerStarted","Data":"3f68274f91c27d15a060c5bac225b0b94e8aa70b90454461d048fa9e384a03df"}
Feb 20 14:46:38.800375 master-0 kubenswrapper[4172]: I0220 14:46:38.790189 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" event={"ID":"43e9807a-859c-44c1-8511-0066b0f59ff8","Type":"ContainerStarted","Data":"434ed936cc25c1d0e0f36dd52a8572c7b7417d14a5a50821cdca25739e6e9d2b"}
Feb 20 14:46:38.800375 master-0 kubenswrapper[4172]: I0220 14:46:38.791726 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr" event={"ID":"900e244c-67aa-402f-b5f0-d37c5c1cedf7","Type":"ContainerStarted","Data":"9bd614ac7dafc38d2154363d724a872731a806692546d4bc858006cdc5ade17d"}
Feb 20 14:46:38.800375 master-0 kubenswrapper[4172]: I0220 14:46:38.792327 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" event={"ID":"8b73ae08-0ad7-4f99-8002-6df0d984cd2c","Type":"ContainerStarted","Data":"afc706c41127ee1f98bf413cd8a012a0e0a8f183eef4bf77721d14a272ded89e"}
Feb 20 14:46:38.800375 master-0 kubenswrapper[4172]: I0220 14:46:38.792800 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" event={"ID":"db9dc349-5216-43ff-8c17-3a9384a010ea","Type":"ContainerStarted","Data":"95650a37daeacacf8e69d045d48ba4a17652648a0c83345072715e4ffcfa2dda"}
Feb 20 14:46:38.828029 master-0 kubenswrapper[4172]: I0220 14:46:38.827950 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" event={"ID":"c81ad608-a8ad-4289-a8d2-d48acb9b540c","Type":"ContainerStarted","Data":"92d6a373c92ade68969e49443823f212abf3c0859e9aaf5d10ff5913a474e6f8"}
Feb 20 14:46:38.828884 master-0 kubenswrapper[4172]: I0220 14:46:38.828305 4172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" podStartSLOduration=104.828280252 podStartE2EDuration="1m44.828280252s" podCreationTimestamp="2026-02-20 14:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:46:38.826094209 +0000 UTC m=+139.381319809" watchObservedRunningTime="2026-02-20 14:46:38.828280252 +0000 UTC m=+139.383505852"
Feb 20 14:46:38.834063 master-0 kubenswrapper[4172]: I0220 14:46:38.829587 4172 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" event={"ID":"234a44fd-c153-47a6-a11d-7d4b7165c236","Type":"ContainerStarted","Data":"7190b6f768a0fe97808696f83db6e3236f51dc32c15727d9791bd6e154e97696"}
Feb 20 14:46:40.627367 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Feb 20 14:46:40.628438 master-0 kubenswrapper[4172]: I0220 14:46:40.627299 4172 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 20 14:46:40.661474 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Feb 20 14:46:40.662077 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Feb 20 14:46:40.665740 master-0 systemd[1]: kubelet.service: Consumed 10.487s CPU time.
Feb 20 14:46:40.683034 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 20 14:46:40.876696 master-0 kubenswrapper[7744]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 14:46:40.876696 master-0 kubenswrapper[7744]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 20 14:46:40.876696 master-0 kubenswrapper[7744]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 14:46:40.876696 master-0 kubenswrapper[7744]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 14:46:40.876696 master-0 kubenswrapper[7744]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 20 14:46:40.877534 master-0 kubenswrapper[7744]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 14:46:40.877534 master-0 kubenswrapper[7744]: I0220 14:46:40.876854 7744 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 20 14:46:40.882598 master-0 kubenswrapper[7744]: W0220 14:46:40.882484 7744 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 14:46:40.882598 master-0 kubenswrapper[7744]: W0220 14:46:40.882571 7744 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 14:46:40.882598 master-0 kubenswrapper[7744]: W0220 14:46:40.882580 7744 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 14:46:40.882598 master-0 kubenswrapper[7744]: W0220 14:46:40.882587 7744 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 14:46:40.882598 master-0 kubenswrapper[7744]: W0220 14:46:40.882593 7744 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 14:46:40.882598 master-0 kubenswrapper[7744]: W0220 14:46:40.882605 7744 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882612 7744 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882617 7744 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882623 7744 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882629 7744 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882635 7744 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882640 7744 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882646 7744 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882651 7744 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882657 7744 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882663 7744 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882670 7744 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882679 7744 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882685 7744 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882691 7744 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882697 7744 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882703 7744 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882708 7744 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882714 7744 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882721 7744 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 14:46:40.883100 master-0 kubenswrapper[7744]: W0220 14:46:40.882726 7744 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882733 7744 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882739 7744 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882744 7744 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882750 7744 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882760 7744 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882765 7744 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882773 7744 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882781 7744 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882787 7744 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882792 7744 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882799 7744 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882835 7744 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882842 7744 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882847 7744 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882853 7744 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882865 7744 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882872 7744 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882878 7744 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882884 7744 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 14:46:40.884110 master-0 kubenswrapper[7744]: W0220 14:46:40.882889 7744 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.882894 7744 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.882899 7744 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.882905 7744 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.882912 7744 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.882919 7744 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.882956 7744 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.882962 7744 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.882972 7744 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.882978 7744 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.882984 7744 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.882989 7744 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.882997 7744 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.883004 7744 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.883009 7744 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.883015 7744 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.883020 7744 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.883028 7744 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.883034 7744 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 14:46:40.885116 master-0 kubenswrapper[7744]: W0220 14:46:40.883045 7744 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: W0220 14:46:40.883051 7744 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: W0220 14:46:40.883056 7744 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: W0220 14:46:40.883062 7744 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: W0220 14:46:40.883067 7744 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: W0220 14:46:40.883074 7744 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: W0220 14:46:40.883080 7744 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: W0220 14:46:40.883085 7744 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883235 7744 flags.go:64] FLAG: --address="0.0.0.0"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883293 7744 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883309 7744 flags.go:64] FLAG: --anonymous-auth="true"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883322 7744 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883330 7744 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883337 7744 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883344 7744 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883352 7744 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883362 7744 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883369 7744 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883377 7744 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883387 7744 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883394 7744 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883400 7744 flags.go:64] FLAG: --cgroup-root=""
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883406 7744 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 20 14:46:40.886072 master-0 kubenswrapper[7744]: I0220 14:46:40.883413 7744 flags.go:64] FLAG: --client-ca-file=""
Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883419 7744 flags.go:64] FLAG: --cloud-config=""
Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883425 7744 flags.go:64] FLAG: --cloud-provider=""
Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883431 7744 flags.go:64] FLAG: --cluster-dns="[]"
Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883443 7744 flags.go:64] FLAG:
--cluster-domain="" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883452 7744 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883459 7744 flags.go:64] FLAG: --config-dir="" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883465 7744 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883472 7744 flags.go:64] FLAG: --container-log-max-files="5" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883480 7744 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883486 7744 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883493 7744 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883499 7744 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883506 7744 flags.go:64] FLAG: --contention-profiling="false" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883516 7744 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883522 7744 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883529 7744 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883535 7744 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883543 7744 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883549 7744 flags.go:64] FLAG: 
--enable-controller-attach-detach="true" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883556 7744 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883562 7744 flags.go:64] FLAG: --enable-load-reader="false" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883580 7744 flags.go:64] FLAG: --enable-server="true" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883586 7744 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883598 7744 flags.go:64] FLAG: --event-burst="100" Feb 20 14:46:40.887296 master-0 kubenswrapper[7744]: I0220 14:46:40.883605 7744 flags.go:64] FLAG: --event-qps="50" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883611 7744 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883618 7744 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883624 7744 flags.go:64] FLAG: --eviction-hard="" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883632 7744 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883641 7744 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883648 7744 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883655 7744 flags.go:64] FLAG: --eviction-soft="" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883661 7744 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883667 7744 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 20 
14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883674 7744 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883680 7744 flags.go:64] FLAG: --experimental-mounter-path="" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883686 7744 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883692 7744 flags.go:64] FLAG: --fail-swap-on="true" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883702 7744 flags.go:64] FLAG: --feature-gates="" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883709 7744 flags.go:64] FLAG: --file-check-frequency="20s" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883716 7744 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883722 7744 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883728 7744 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883736 7744 flags.go:64] FLAG: --healthz-port="10248" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883742 7744 flags.go:64] FLAG: --help="false" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883748 7744 flags.go:64] FLAG: --hostname-override="" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883758 7744 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883768 7744 flags.go:64] FLAG: --http-check-frequency="20s" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883775 7744 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 20 14:46:40.888608 master-0 kubenswrapper[7744]: I0220 14:46:40.883780 
7744 flags.go:64] FLAG: --image-credential-provider-config="" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883786 7744 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883792 7744 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883798 7744 flags.go:64] FLAG: --image-service-endpoint="" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883805 7744 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883811 7744 flags.go:64] FLAG: --kube-api-burst="100" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883820 7744 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883827 7744 flags.go:64] FLAG: --kube-api-qps="50" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883845 7744 flags.go:64] FLAG: --kube-reserved="" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883851 7744 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883857 7744 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883864 7744 flags.go:64] FLAG: --kubelet-cgroups="" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883870 7744 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883876 7744 flags.go:64] FLAG: --lock-file="" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883882 7744 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883892 7744 flags.go:64] FLAG: --log-flush-frequency="5s" 
Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883898 7744 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883907 7744 flags.go:64] FLAG: --log-json-split-stream="false" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883913 7744 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883919 7744 flags.go:64] FLAG: --log-text-split-stream="false" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883943 7744 flags.go:64] FLAG: --logging-format="text" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883949 7744 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883956 7744 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883966 7744 flags.go:64] FLAG: --manifest-url="" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883972 7744 flags.go:64] FLAG: --manifest-url-header="" Feb 20 14:46:40.890399 master-0 kubenswrapper[7744]: I0220 14:46:40.883980 7744 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.883986 7744 flags.go:64] FLAG: --max-open-files="1000000" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.883994 7744 flags.go:64] FLAG: --max-pods="110" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884000 7744 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884006 7744 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884015 7744 flags.go:64] FLAG: --memory-manager-policy="None" Feb 20 14:46:40.892103 master-0 
kubenswrapper[7744]: I0220 14:46:40.884024 7744 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884031 7744 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884037 7744 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884044 7744 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884059 7744 flags.go:64] FLAG: --node-status-max-images="50" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884065 7744 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884072 7744 flags.go:64] FLAG: --oom-score-adj="-999" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884082 7744 flags.go:64] FLAG: --pod-cidr="" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884088 7744 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884097 7744 flags.go:64] FLAG: --pod-manifest-path="" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884103 7744 flags.go:64] FLAG: --pod-max-pids="-1" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884110 7744 flags.go:64] FLAG: --pods-per-core="0" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884116 7744 flags.go:64] FLAG: --port="10250" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884131 7744 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 
14:46:40.884138 7744 flags.go:64] FLAG: --provider-id="" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884147 7744 flags.go:64] FLAG: --qos-reserved="" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884154 7744 flags.go:64] FLAG: --read-only-port="10255" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884160 7744 flags.go:64] FLAG: --register-node="true" Feb 20 14:46:40.892103 master-0 kubenswrapper[7744]: I0220 14:46:40.884167 7744 flags.go:64] FLAG: --register-schedulable="true" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884173 7744 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884186 7744 flags.go:64] FLAG: --registry-burst="10" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884192 7744 flags.go:64] FLAG: --registry-qps="5" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884205 7744 flags.go:64] FLAG: --reserved-cpus="" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884215 7744 flags.go:64] FLAG: --reserved-memory="" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884222 7744 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884229 7744 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884235 7744 flags.go:64] FLAG: --rotate-certificates="false" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884241 7744 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884247 7744 flags.go:64] FLAG: --runonce="false" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884253 7744 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 20 
14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884260 7744 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884272 7744 flags.go:64] FLAG: --seccomp-default="false" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884278 7744 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884284 7744 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884291 7744 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884298 7744 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884304 7744 flags.go:64] FLAG: --storage-driver-password="root" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884310 7744 flags.go:64] FLAG: --storage-driver-secure="false" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884316 7744 flags.go:64] FLAG: --storage-driver-table="stats" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884323 7744 flags.go:64] FLAG: --storage-driver-user="root" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884332 7744 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884338 7744 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884345 7744 flags.go:64] FLAG: --system-cgroups="" Feb 20 14:46:40.893675 master-0 kubenswrapper[7744]: I0220 14:46:40.884351 7744 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884361 7744 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 20 
14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884367 7744 flags.go:64] FLAG: --tls-cert-file="" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884374 7744 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884389 7744 flags.go:64] FLAG: --tls-min-version="" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884395 7744 flags.go:64] FLAG: --tls-private-key-file="" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884408 7744 flags.go:64] FLAG: --topology-manager-policy="none" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884415 7744 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884421 7744 flags.go:64] FLAG: --topology-manager-scope="container" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884428 7744 flags.go:64] FLAG: --v="2" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884435 7744 flags.go:64] FLAG: --version="false" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884457 7744 flags.go:64] FLAG: --vmodule="" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884473 7744 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: I0220 14:46:40.884480 7744 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: W0220 14:46:40.884776 7744 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: W0220 14:46:40.884784 7744 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: W0220 14:46:40.884790 7744 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 20 14:46:40.895836 
master-0 kubenswrapper[7744]: W0220 14:46:40.884796 7744 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: W0220 14:46:40.884802 7744 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: W0220 14:46:40.884808 7744 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: W0220 14:46:40.884816 7744 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: W0220 14:46:40.884821 7744 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 20 14:46:40.895836 master-0 kubenswrapper[7744]: W0220 14:46:40.884828 7744 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884836 7744 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884843 7744 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884849 7744 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884859 7744 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884865 7744 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884870 7744 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884876 7744 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884881 7744 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884887 7744 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884892 7744 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884898 7744 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884903 7744 feature_gate.go:330] unrecognized feature gate: Example Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884908 7744 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884914 7744 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884919 7744 feature_gate.go:330] 
unrecognized feature gate: MachineAPIProviderOpenStack Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884945 7744 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884951 7744 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884956 7744 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884962 7744 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 20 14:46:40.896963 master-0 kubenswrapper[7744]: W0220 14:46:40.884976 7744 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.884983 7744 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.884989 7744 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.884995 7744 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885000 7744 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885005 7744 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885010 7744 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885016 7744 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885021 7744 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 20 
14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885030 7744 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885038 7744 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885043 7744 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885048 7744 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885054 7744 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885059 7744 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885066 7744 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885073 7744 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885079 7744 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885086 7744 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885091 7744 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 20 14:46:40.898054 master-0 kubenswrapper[7744]: W0220 14:46:40.885097 7744 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885102 7744 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885111 7744 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885116 7744 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885122 7744 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885128 7744 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885133 7744 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885138 7744 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885143 7744 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 20 
14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885149 7744 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885154 7744 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885160 7744 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885165 7744 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885172 7744 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885178 7744 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885186 7744 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885202 7744 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885209 7744 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885215 7744 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 14:46:40.899539 master-0 kubenswrapper[7744]: W0220 14:46:40.885220 7744 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: W0220 14:46:40.885226 7744 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: W0220 14:46:40.885232 7744 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: W0220 14:46:40.885239 7744 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: W0220 14:46:40.885245 7744 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: I0220 14:46:40.885262 7744 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: I0220 14:46:40.897763 7744 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: I0220 14:46:40.897811 7744 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: W0220 14:46:40.898085 7744 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: W0220 14:46:40.898099 7744 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: W0220 14:46:40.898109 7744 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: W0220 14:46:40.898124 7744 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: W0220 14:46:40.898132 7744 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: W0220 14:46:40.898141 7744 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: W0220 14:46:40.898149 7744 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 14:46:40.900635 master-0 kubenswrapper[7744]: W0220 14:46:40.898157 7744 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898165 7744 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898173 7744 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898182 7744 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898190 7744 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898198 7744 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898206 7744 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898215 7744 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898234 7744 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898244 7744 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898259 7744 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898274 7744 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898291 7744 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898305 7744 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898318 7744 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898331 7744 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898342 7744 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898352 7744 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 14:46:40.901443 master-0 kubenswrapper[7744]: W0220 14:46:40.898360 7744 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898370 7744 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898385 7744 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898393 7744 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898401 7744 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898412 7744 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898423 7744 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898433 7744 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898442 7744 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898452 7744 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898460 7744 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898469 7744 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898477 7744 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898485 7744 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898498 7744 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898506 7744 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898514 7744 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898522 7744 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898530 7744 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898537 7744 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 14:46:40.902376 master-0 kubenswrapper[7744]: W0220 14:46:40.898545 7744 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898553 7744 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898561 7744 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898569 7744 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898633 7744 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898644 7744 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898652 7744 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898666 7744 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898674 7744 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898682 7744 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898691 7744 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898699 7744 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898707 7744 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898715 7744 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898723 7744 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898731 7744 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898738 7744 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898747 7744 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.898756 7744 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.899142 7744 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.899160 7744 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 14:46:40.903644 master-0 kubenswrapper[7744]: W0220 14:46:40.899168 7744 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899176 7744 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899184 7744 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899192 7744 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899200 7744 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899209 7744 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: I0220 14:46:40.899225 7744 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899622 7744 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899642 7744 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899661 7744 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899669 7744 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899678 7744 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899685 7744 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899701 7744 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899716 7744 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 14:46:40.904729 master-0 kubenswrapper[7744]: W0220 14:46:40.899731 7744 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899740 7744 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899754 7744 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899772 7744 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899786 7744 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899794 7744 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899802 7744 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899810 7744 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899820 7744 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899831 7744 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899839 7744 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899847 7744 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899855 7744 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899863 7744 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899871 7744 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899884 7744 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899894 7744 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899903 7744 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899913 7744 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 14:46:40.905488 master-0 kubenswrapper[7744]: W0220 14:46:40.899957 7744 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.899967 7744 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.899975 7744 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.899983 7744 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.899991 7744 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.899999 7744 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900007 7744 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900015 7744 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900022 7744 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900030 7744 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900038 7744 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900046 7744 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900054 7744 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900062 7744 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900070 7744 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900077 7744 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900085 7744 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900093 7744 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900101 7744 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900109 7744 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 14:46:40.906440 master-0 kubenswrapper[7744]: W0220 14:46:40.900117 7744 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900124 7744 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900132 7744 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900140 7744 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900147 7744 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900155 7744 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900162 7744 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900176 7744 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900187 7744 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900196 7744 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900206 7744 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900215 7744 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900223 7744 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900230 7744 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900239 7744 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900247 7744 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900254 7744 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900262 7744 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900269 7744 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900277 7744 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 14:46:40.907527 master-0 kubenswrapper[7744]: W0220 14:46:40.900285 7744 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 14:46:40.908684 master-0 kubenswrapper[7744]: W0220 14:46:40.900293 7744 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 14:46:40.908684 master-0 kubenswrapper[7744]: W0220 14:46:40.900300 7744 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 14:46:40.908684 master-0 kubenswrapper[7744]: W0220 14:46:40.900308 7744 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 14:46:40.908684 master-0 kubenswrapper[7744]: W0220 14:46:40.900318 7744 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 14:46:40.908684 master-0 kubenswrapper[7744]: I0220 14:46:40.900332 7744 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 14:46:40.908684 master-0 kubenswrapper[7744]: I0220 14:46:40.900789 7744 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 20 14:46:40.908684 master-0 kubenswrapper[7744]: I0220 14:46:40.903873 7744 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 20 14:46:40.908684 master-0 kubenswrapper[7744]: I0220 14:46:40.903981 7744 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 20 14:46:40.908684 master-0 kubenswrapper[7744]: I0220 14:46:40.904268 7744 server.go:997] "Starting client certificate rotation"
Feb 20 14:46:40.908684 master-0 kubenswrapper[7744]: I0220 14:46:40.904279 7744 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 20 14:46:40.908684 master-0 kubenswrapper[7744]: I0220 14:46:40.904534 7744 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-21 14:36:18 +0000 UTC, rotation deadline is 2026-02-21 11:08:45.750514505 +0000 UTC
Feb 20 14:46:40.908684 master-0 kubenswrapper[7744]: I0220 14:46:40.904641 7744 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h22m4.845878172s for next certificate rotation
Feb 20 14:46:40.909327 master-0 kubenswrapper[7744]: I0220 14:46:40.905063 7744 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 20 14:46:40.909327 master-0 kubenswrapper[7744]: I0220 14:46:40.906686 7744 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 20 14:46:40.910642 master-0 kubenswrapper[7744]: I0220 14:46:40.910591 7744 log.go:25] "Validated CRI v1 runtime API"
Feb 20 14:46:40.915125 master-0 kubenswrapper[7744]: I0220 14:46:40.915066 7744 log.go:25] "Validated CRI v1 image API"
Feb 20 14:46:40.916383 master-0 kubenswrapper[7744]: I0220 14:46:40.916328 7744 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 20 14:46:40.924504 master-0 kubenswrapper[7744]: I0220 14:46:40.924408 7744 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 f887e099-fa60-4eeb-b981-d71fb787fc62:/dev/vda3]
Feb 20 14:46:40.925256 master-0 kubenswrapper[7744]: I0220 14:46:40.924481 7744 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0ea53368ce61e6c8836a7d0c6d716b7e2c7e18ee974ab80f253b08e24d34227b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0ea53368ce61e6c8836a7d0c6d716b7e2c7e18ee974ab80f253b08e24d34227b/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1489b48b9281848030ac8650ba6a4f51919e00d3276dcba9cb79f43f94b0f041/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1489b48b9281848030ac8650ba6a4f51919e00d3276dcba9cb79f43f94b0f041/userdata/shm major:0 minor:113 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1913b004153de96aee747d5e43e4468694e4be30746f1b0a2aa4f60e2176707c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1913b004153de96aee747d5e43e4468694e4be30746f1b0a2aa4f60e2176707c/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/26c5fe83ca44257f00aa75056a5ba23aa71fd99df73033faf567ea11ded1340f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/26c5fe83ca44257f00aa75056a5ba23aa71fd99df73033faf567ea11ded1340f/userdata/shm major:0 minor:143 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/329b7497d730cc1438c1c88bd3563dab745cc5c71baf09835af567df43aee00e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/329b7497d730cc1438c1c88bd3563dab745cc5c71baf09835af567df43aee00e/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3595d9d8fc957b18c48383f1ad0fcfa521ef5e3e33c6ab788b51ff8638981630/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3595d9d8fc957b18c48383f1ad0fcfa521ef5e3e33c6ab788b51ff8638981630/userdata/shm major:0 
minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3f68274f91c27d15a060c5bac225b0b94e8aa70b90454461d048fa9e384a03df/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f68274f91c27d15a060c5bac225b0b94e8aa70b90454461d048fa9e384a03df/userdata/shm major:0 minor:283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/469af398b29095aa460373b4a9d58261db50995525853368aaa76c2198d9753f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/469af398b29095aa460373b4a9d58261db50995525853368aaa76c2198d9753f/userdata/shm major:0 minor:117 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5ea8ac7578359ce087855682fd87fbd08a72604f8701716ddbb28b051d93bff2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5ea8ac7578359ce087855682fd87fbd08a72604f8701716ddbb28b051d93bff2/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6315ef904771a7f7ee8f8fb64b568088a83f03dc9235439160e67d9df1c9a04f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6315ef904771a7f7ee8f8fb64b568088a83f03dc9235439160e67d9df1c9a04f/userdata/shm major:0 minor:286 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7190b6f768a0fe97808696f83db6e3236f51dc32c15727d9791bd6e154e97696/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7190b6f768a0fe97808696f83db6e3236f51dc32c15727d9791bd6e154e97696/userdata/shm major:0 minor:293 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/92d6a373c92ade68969e49443823f212abf3c0859e9aaf5d10ff5913a474e6f8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/92d6a373c92ade68969e49443823f212abf3c0859e9aaf5d10ff5913a474e6f8/userdata/shm major:0 minor:278 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/95115710de33578fe832a95630e8d98eba6ecc806a442bdc7740ad889ac1e80b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95115710de33578fe832a95630e8d98eba6ecc806a442bdc7740ad889ac1e80b/userdata/shm major:0 minor:131 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95650a37daeacacf8e69d045d48ba4a17652648a0c83345072715e4ffcfa2dda/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95650a37daeacacf8e69d045d48ba4a17652648a0c83345072715e4ffcfa2dda/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9bd614ac7dafc38d2154363d724a872731a806692546d4bc858006cdc5ade17d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9bd614ac7dafc38d2154363d724a872731a806692546d4bc858006cdc5ade17d/userdata/shm major:0 minor:285 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9df920ca539f41ddc66a331c27bc3a12a40dbc8ec795ca71f8a746f6b5203647/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9df920ca539f41ddc66a331c27bc3a12a40dbc8ec795ca71f8a746f6b5203647/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/afc706c41127ee1f98bf413cd8a012a0e0a8f183eef4bf77721d14a272ded89e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/afc706c41127ee1f98bf413cd8a012a0e0a8f183eef4bf77721d14a272ded89e/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4cf8dbc3fd31a273c2cbd586eecdb2a0961392b7bd552bb39381cfb88539e45/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4cf8dbc3fd31a273c2cbd586eecdb2a0961392b7bd552bb39381cfb88539e45/userdata/shm major:0 minor:280 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b953e5f23702f5654559767cf06b2635635ca7c579d9ee9d2d2bf61bf3d9a6b1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b953e5f23702f5654559767cf06b2635635ca7c579d9ee9d2d2bf61bf3d9a6b1/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c7111b0bf2b7379929af69699174f229cbbc25f01fc7ffc44b3371950f17c6f2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c7111b0bf2b7379929af69699174f229cbbc25f01fc7ffc44b3371950f17c6f2/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d15f8dfa0d113319aa72954517575419d7a6afcad7f7cef9517b2fb935c0ea42/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d15f8dfa0d113319aa72954517575419d7a6afcad7f7cef9517b2fb935c0ea42/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e94527abc555de66f60f9e134865dfe60d787ebd1878546078cb9b2523c30cab/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e94527abc555de66f60f9e134865dfe60d787ebd1878546078cb9b2523c30cab/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ff5aeff3d91fe04ad5b35e5f18daa8ee28aba3161b0999bafdb650c9674062ac/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ff5aeff3d91fe04ad5b35e5f18daa8ee28aba3161b0999bafdb650c9674062ac/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1fe69517-eec2-4721-933c-fa27cea7ab1f/volumes/kubernetes.io~projected/kube-api-access-rnwtd:{mountpoint:/var/lib/kubelet/pods/1fe69517-eec2-4721-933c-fa27cea7ab1f/volumes/kubernetes.io~projected/kube-api-access-rnwtd major:0 minor:253 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volumes/kubernetes.io~projected/kube-api-access-gr6nr:{mountpoint:/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volumes/kubernetes.io~projected/kube-api-access-gr6nr major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~projected/kube-api-access-gwb5n:{mountpoint:/var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~projected/kube-api-access-gwb5n major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~secret/etcd-client major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~secret/serving-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31d71c90-cab7-4411-9426-0713cb026294/volumes/kubernetes.io~projected/kube-api-access-57cks:{mountpoint:/var/lib/kubelet/pods/31d71c90-cab7-4411-9426-0713cb026294/volumes/kubernetes.io~projected/kube-api-access-57cks major:0 minor:268 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/32a79fe0-e619-4a66-8617-e8111bdc7e96/volumes/kubernetes.io~projected/kube-api-access-jkq7j:{mountpoint:/var/lib/kubelet/pods/32a79fe0-e619-4a66-8617-e8111bdc7e96/volumes/kubernetes.io~projected/kube-api-access-jkq7j major:0 minor:110 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33675e96-ce49-49be-9117-954ac7cca5d5/volumes/kubernetes.io~projected/kube-api-access-hbw6n:{mountpoint:/var/lib/kubelet/pods/33675e96-ce49-49be-9117-954ac7cca5d5/volumes/kubernetes.io~projected/kube-api-access-hbw6n major:0 minor:167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33675e96-ce49-49be-9117-954ac7cca5d5/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/33675e96-ce49-49be-9117-954ac7cca5d5/volumes/kubernetes.io~secret/webhook-cert major:0 minor:166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/419f28a9-8fd7-4b59-9554-4d884a1208b5/volumes/kubernetes.io~projected/kube-api-access-fttgr:{mountpoint:/var/lib/kubelet/pods/419f28a9-8fd7-4b59-9554-4d884a1208b5/volumes/kubernetes.io~projected/kube-api-access-fttgr major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43e9807a-859c-44c1-8511-0066b0f59ff8/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/43e9807a-859c-44c1-8511-0066b0f59ff8/volumes/kubernetes.io~projected/kube-api-access major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43e9807a-859c-44c1-8511-0066b0f59ff8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/43e9807a-859c-44c1-8511-0066b0f59ff8/volumes/kubernetes.io~secret/serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45d7ef0c-272b-4d1e-965f-484975d5d25c/volumes/kubernetes.io~projected/kube-api-access-svhtr:{mountpoint:/var/lib/kubelet/pods/45d7ef0c-272b-4d1e-965f-484975d5d25c/volumes/kubernetes.io~projected/kube-api-access-svhtr major:0 minor:242 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/45d7ef0c-272b-4d1e-965f-484975d5d25c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/45d7ef0c-272b-4d1e-965f-484975d5d25c/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4c31b8a7-edcb-403d-9122-7eb740f7d659/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4c31b8a7-edcb-403d-9122-7eb740f7d659/volumes/kubernetes.io~projected/kube-api-access major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4c31b8a7-edcb-403d-9122-7eb740f7d659/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4c31b8a7-edcb-403d-9122-7eb740f7d659/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4cede061-d85a-4366-9f1e-90be51f726fc/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4cede061-d85a-4366-9f1e-90be51f726fc/volumes/kubernetes.io~projected/kube-api-access major:0 minor:112 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5d2b154b-de63-4c9b-99d8-487fb3035fb9/volumes/kubernetes.io~projected/kube-api-access-mclrj:{mountpoint:/var/lib/kubelet/pods/5d2b154b-de63-4c9b-99d8-487fb3035fb9/volumes/kubernetes.io~projected/kube-api-access-mclrj major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5d2b154b-de63-4c9b-99d8-487fb3035fb9/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/5d2b154b-de63-4c9b-99d8-487fb3035fb9/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ea4c132-b6d0-4dc9-942d-48e359eed418/volumes/kubernetes.io~projected/kube-api-access-7nlf9:{mountpoint:/var/lib/kubelet/pods/5ea4c132-b6d0-4dc9-942d-48e359eed418/volumes/kubernetes.io~projected/kube-api-access-7nlf9 major:0 minor:135 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8157f73d-c757-40c4-80bc-3c9de2f2288a/volumes/kubernetes.io~projected/kube-api-access-bk5m4:{mountpoint:/var/lib/kubelet/pods/8157f73d-c757-40c4-80bc-3c9de2f2288a/volumes/kubernetes.io~projected/kube-api-access-bk5m4 major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8157f73d-c757-40c4-80bc-3c9de2f2288a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8157f73d-c757-40c4-80bc-3c9de2f2288a/volumes/kubernetes.io~secret/serving-cert major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/87cf4690-1ec1-44fc-94bd-730d9f2e6762/volumes/kubernetes.io~projected/kube-api-access-r9c94:{mountpoint:/var/lib/kubelet/pods/87cf4690-1ec1-44fc-94bd-730d9f2e6762/volumes/kubernetes.io~projected/kube-api-access-r9c94 major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b73ae08-0ad7-4f99-8002-6df0d984cd2c/volumes/kubernetes.io~projected/kube-api-access-mb46b:{mountpoint:/var/lib/kubelet/pods/8b73ae08-0ad7-4f99-8002-6df0d984cd2c/volumes/kubernetes.io~projected/kube-api-access-mb46b major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b73ae08-0ad7-4f99-8002-6df0d984cd2c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8b73ae08-0ad7-4f99-8002-6df0d984cd2c/volumes/kubernetes.io~secret/serving-cert major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/900e244c-67aa-402f-b5f0-d37c5c1cedf7/volumes/kubernetes.io~projected/kube-api-access-n85mh:{mountpoint:/var/lib/kubelet/pods/900e244c-67aa-402f-b5f0-d37c5c1cedf7/volumes/kubernetes.io~projected/kube-api-access-n85mh major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/989af121-da08-4f40-b08c-dd2aa67bc60c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/989af121-da08-4f40-b08c-dd2aa67bc60c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:256 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/989af121-da08-4f40-b08c-dd2aa67bc60c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/989af121-da08-4f40-b08c-dd2aa67bc60c/volumes/kubernetes.io~secret/serving-cert major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9fd9f419-2cdc-4991-8fb9-87d76ac58976/volumes/kubernetes.io~projected/kube-api-access-svlzf:{mountpoint:/var/lib/kubelet/pods/9fd9f419-2cdc-4991-8fb9-87d76ac58976/volumes/kubernetes.io~projected/kube-api-access-svlzf major:0 minor:111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9fd9f419-2cdc-4991-8fb9-87d76ac58976/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/9fd9f419-2cdc-4991-8fb9-87d76ac58976/volumes/kubernetes.io~secret/metrics-tls major:0 minor:77 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1fb2774-6dd7-4429-9df3-4ddfcdaac939/volumes/kubernetes.io~projected/kube-api-access-jk9xr:{mountpoint:/var/lib/kubelet/pods/a1fb2774-6dd7-4429-9df3-4ddfcdaac939/volumes/kubernetes.io~projected/kube-api-access-jk9xr major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6e6d218-d969-40b5-a32b-9b2093089dbf/volumes/kubernetes.io~projected/kube-api-access-psd59:{mountpoint:/var/lib/kubelet/pods/b6e6d218-d969-40b5-a32b-9b2093089dbf/volumes/kubernetes.io~projected/kube-api-access-psd59 major:0 minor:130 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~projected/kube-api-access-pzmqr:{mountpoint:/var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~projected/kube-api-access-pzmqr major:0 minor:255 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c0a3548f-299c-4234-9bf1-c93efcb9740b/volumes/kubernetes.io~projected/kube-api-access-7d5fq:{mountpoint:/var/lib/kubelet/pods/c0a3548f-299c-4234-9bf1-c93efcb9740b/volumes/kubernetes.io~projected/kube-api-access-7d5fq major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c81ad608-a8ad-4289-a8d2-d48acb9b540c/volumes/kubernetes.io~projected/kube-api-access-wj4dx:{mountpoint:/var/lib/kubelet/pods/c81ad608-a8ad-4289-a8d2-d48acb9b540c/volumes/kubernetes.io~projected/kube-api-access-wj4dx major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c81ad608-a8ad-4289-a8d2-d48acb9b540c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c81ad608-a8ad-4289-a8d2-d48acb9b540c/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d28490b0-96ca-4fe0-8fae-e6f8390f933b/volumes/kubernetes.io~projected/kube-api-access-qm5p2:{mountpoint:/var/lib/kubelet/pods/d28490b0-96ca-4fe0-8fae-e6f8390f933b/volumes/kubernetes.io~projected/kube-api-access-qm5p2 major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3ca2d2f-9f31-4524-a28f-cf16b02dd711/volumes/kubernetes.io~projected/kube-api-access-4jn8g:{mountpoint:/var/lib/kubelet/pods/d3ca2d2f-9f31-4524-a28f-cf16b02dd711/volumes/kubernetes.io~projected/kube-api-access-4jn8g major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3ca2d2f-9f31-4524-a28f-cf16b02dd711/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/d3ca2d2f-9f31-4524-a28f-cf16b02dd711/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db9dc349-5216-43ff-8c17-3a9384a010ea/volumes/kubernetes.io~projected/kube-api-access-smglm:{mountpoint:/var/lib/kubelet/pods/db9dc349-5216-43ff-8c17-3a9384a010ea/volumes/kubernetes.io~projected/kube-api-access-smglm major:0 minor:269 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/db9dc349-5216-43ff-8c17-3a9384a010ea/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/db9dc349-5216-43ff-8c17-3a9384a010ea/volumes/kubernetes.io~secret/serving-cert major:0 minor:252 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/41833c2ec1b3af4047582f7cede91a689f9c9c56a7da92514b19696f9de5c8fc/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-115:{mountpoint:/var/lib/containers/storage/overlay/9b4bb7bf4d5cc47259fef5a25c6f92b4bac2fba022ebe82b81d64bdd4d76a2e1/merged major:0 minor:115 fsType:overlay blockSize:0} overlay_0-119:{mountpoint:/var/lib/containers/storage/overlay/a213ddefcc2eac415c59ebb84984762aa19415b47e14e15d3a32ecbb8c4e8db4/merged major:0 minor:119 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/f4f4921c4279a75135a7edb09a87486976e2f4bcb6e4a92228fc91ff8134cd9e/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-123:{mountpoint:/var/lib/containers/storage/overlay/5d36f7a42ddc9f1289f2bfd6dcd693d2f1411b0916aa0c91a9a47541dbadd79e/merged major:0 minor:123 fsType:overlay blockSize:0} overlay_0-125:{mountpoint:/var/lib/containers/storage/overlay/a9682a4735a156b63f3d33c12d2c286d109e4d67a515bb5a23fad4adee06a814/merged major:0 minor:125 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/42fea0280817d534a2baf61ef052d3c5fef40dc4470baf5ff1c6372fa88cdd14/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/371c312551cf30b9bd5610ccc6f83ad3a2420bb932045b3bf69f44a34c8fd209/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/88afa83af9754029705c132540fee1717e3f30bda62375056689c08e3fd9c186/merged major:0 minor:146 fsType:overlay blockSize:0} 
overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/9cdc891a1b2041c1f0fbd12af9293cc3eba45b2310911c564caaf97784d61b35/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/095b07d1fdec20a84a4b6b8faf0e952e43e5714129579d1b7f0b3385967cc719/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/91b598f62b9d266fdf4e03ff8f12dabff58d2d97e4b1e3aaec93e2f4cfbaf7f6/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/04b49d66ea78cded76006545f5f0991bdf5168cd60c57f78b7e188bc427a24a4/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/ce916713e86b6573843078429d32e9e3c301db603bfaf66103172ba8dd211ec9/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/9660bbdd6383fc73dd97f0f279768413e10d264ac38e590e70cfefe8f0187ce1/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/d1e21a6b71cd6adad9ff667b49d3a0ea53813e7526f0eae3ae00e0876f217363/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/b02be86bbff28746f6aafe50e5c48ca560b1ed8c57bda8e5fcff55adea9d35e9/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/06607e04a8667b0ad462a765b6138c230302bc723f96b3d1e36ecb804f94fcc7/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/884dddb3ccb62f88a585a5ea28de915faa79eddb5fb65af77030322802394dbf/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/33a2defa9b85296d1a0b80315602adb8a66752f61db42a95e863938dc8d11282/merged major:0 minor:178 fsType:overlay blockSize:0} 
overlay_0-180:{mountpoint:/var/lib/containers/storage/overlay/f00867832b255e75707a9fabc6305f45d3d2876d766fe496b97e14423b6b5b60/merged major:0 minor:180 fsType:overlay blockSize:0} overlay_0-182:{mountpoint:/var/lib/containers/storage/overlay/f6dae9f7fa172b5cf053523244188662150c9ccf1cc16344dd9ad9fcd780b074/merged major:0 minor:182 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/e494711d932198e113069979d11742defc146f0c350d81d4f13bac816c1fcad0/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/63c3e916b033be6d0858bfd7ffc02b6682d3131b17372c4ffe0223760d31d475/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/6752b334a95f512e46fdcf652c3c38a8d91157660efab64ad5ed1342df317bcb/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-202:{mountpoint:/var/lib/containers/storage/overlay/e1e1ab6f1a3a16c6097fc1bc10107c8bdf011b4cf8a44aaae301f1502e2d1be3/merged major:0 minor:202 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/1209a0651f9fdd29c776b6fd6e9aebabc3684884ab3b77aa961f078a888e6426/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/5abe1a1dfcedfe8aea872180aea9d9066d024b2579673806ae04f105fe9d79f3/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-217:{mountpoint:/var/lib/containers/storage/overlay/159e8b8325ea5f0bff51e0166c11e5e19c9d3891873c042f1a1541cc136a56c1/merged major:0 minor:217 fsType:overlay blockSize:0} overlay_0-222:{mountpoint:/var/lib/containers/storage/overlay/19ec1cfddd6029204bd1e471a20b4eff22fd6a6182bd701cc30a9b8a73581ebd/merged major:0 minor:222 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/61ee62664ae85ddf0c90683532d8e9705aa26361a25aa487c11f62ce2327af65/merged major:0 minor:230 fsType:overlay blockSize:0} 
overlay_0-270:{mountpoint:/var/lib/containers/storage/overlay/ef949e9f88456f7303370d9cc1840f562391df74e20098f0528d76daaa9d42ea/merged major:0 minor:270 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/5ff857cc9fffaaebde426bf5f86e18c24e6e0e1a2430185a110f213dd393c9a4/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/8ab419c96e042d72e23d71d2b690d3a372d40de7726d974249ce53c590c85b33/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/c9ed36b06f64c2a9d4c2857406c11bb5f39771959fe552fe1bfaee30a0cbf0cd/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/38e3f396ff7a0a130ec51bbc340c958e0cd8a6cd694d91f4adb8e1241a51b37d/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/d9f3d597e0230afc1df35523f75b7f9ce9712bcff99fb0f8b329911d2c53cc06/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/02d6cf3a35343bf224af25b1ed6f931d862a0a43d4ae4e3bd43d36937a49a4ff/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/76470ecdbf0d9605fbd5dbaecdf5ef327ad18aedcbe5b0bcb3d7af83441a0979/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-310:{mountpoint:/var/lib/containers/storage/overlay/cd4193a4b62518ccb300694dfe270ed56295e20833819cd1e84944ae89e1c711/merged major:0 minor:310 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/36a06de1514a13d3ec4751fe34163c7df6d5a2995aa7fac634c13b0510acc47e/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/2ff306a0b075c1d2b9642532bab2d974d516d2a6e576cee1b20a1b0a791d3d2e/merged major:0 minor:318 fsType:overlay blockSize:0} 
overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/b2c484d3afb38ece6cf1d3adc2ced31eef7c52e33c18c656481ca73d0bfb29c2/merged major:0 minor:320 fsType:overlay blockSize:0} overlay_0-322:{mountpoint:/var/lib/containers/storage/overlay/9c3995c7f1537015ff5ddfd64f26852fd3465bb9be274ae4c7a4f3820db6afd6/merged major:0 minor:322 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/53c76ecc43293420c85c5dcdbb521c3df32ae40ce32a708738331606439e98a7/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/20726f7e6892612c3b5823a58e454d1187d6c7e4c794379b81825690bdb1404d/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/99441a2dc18c87e6d73ce8b5cf3e7c576f37163d088cb5a78587e0f89e88c957/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/cebcf66871efea27960f118294a7bfa5ba5de2bc9c3fe0aef7dbcac75ac0c434/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/9ff95433627da241fd7fc0423febfbb63cdfeeab52db59dcb56499ac966fd941/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/a8a3c06fd7934077fef989b864ce6c1e8643e3f9a1f6b7f8ad3e457aa4eacb03/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/02ae7df61298da85844631b075cc9b116989a3dabe7a91f5ad056d6a718d20ce/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/4be4f658d12741d859e79a932de1fe56423a30e6fea5cb02504deaf6678f5306/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/462c8e8b44aa23e230cde7535593fd9411bf20234f64da7eb85080c2379b611f/merged major:0 minor:66 fsType:overlay blockSize:0} 
overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/2034230a5cb0571ee1d220b0ad49b383a1b2539b4ee918803c56e3e4f1c5b4a6/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/63542e62fe1ae5883637f9e3194ec246da80e466d48c124ba0c4228ffb68271f/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/05893d4cc305f9e0b2b3305339509b5e7d13a97256d8bc186069213a97a3f066/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/6200a30f082b00c275ea7fa989a2eb8fe3fb0dc23d4a571001710ce3a7460c0a/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-83:{mountpoint:/var/lib/containers/storage/overlay/bd9b6104fa81d58d01d8acba96a676965f5b2e9db9935a5cd5f01c1befca57b5/merged major:0 minor:83 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/4f01ee5413796f040d200625612a3726bfd9664c6b311b26a2082dd6c3d8419a/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/2f39c90dc7f56115d12499bc862161abcd8ff145a77ad3f85abd7d3484c62194/merged major:0 minor:97 fsType:overlay blockSize:0}] Feb 20 14:46:40.967314 master-0 kubenswrapper[7744]: I0220 14:46:40.966165 7744 manager.go:217] Machine: {Timestamp:2026-02-20 14:46:40.96482762 +0000 UTC m=+0.167027580 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:48b9dd3ce20842759e3dc6160315340b SystemUUID:48b9dd3c-e208-4275-9e3d-c6160315340b BootID:509e02d8-f41f-4d6f-8d1a-4efa2a52c9c0 Filesystems:[{Device:overlay_0-83 DeviceMajor:0 DeviceMinor:83 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-97 
DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5d2b154b-de63-4c9b-99d8-487fb3035fb9/volumes/kubernetes.io~projected/kube-api-access-mclrj DeviceMajor:0 DeviceMinor:139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-222 DeviceMajor:0 DeviceMinor:222 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-123 DeviceMajor:0 DeviceMinor:123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9fd9f419-2cdc-4991-8fb9-87d76ac58976/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:77 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-270 DeviceMajor:0 DeviceMinor:270 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8157f73d-c757-40c4-80bc-3c9de2f2288a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4cf8dbc3fd31a273c2cbd586eecdb2a0961392b7bd552bb39381cfb88539e45/userdata/shm DeviceMajor:0 DeviceMinor:280 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 
DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:140 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/45d7ef0c-272b-4d1e-965f-484975d5d25c/volumes/kubernetes.io~projected/kube-api-access-svhtr DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ff5aeff3d91fe04ad5b35e5f18daa8ee28aba3161b0999bafdb650c9674062ac/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4c31b8a7-edcb-403d-9122-7eb740f7d659/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c0a3548f-299c-4234-9bf1-c93efcb9740b/volumes/kubernetes.io~projected/kube-api-access-7d5fq DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8b73ae08-0ad7-4f99-8002-6df0d984cd2c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d28490b0-96ca-4fe0-8fae-e6f8390f933b/volumes/kubernetes.io~projected/kube-api-access-qm5p2 DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/92d6a373c92ade68969e49443823f212abf3c0859e9aaf5d10ff5913a474e6f8/userdata/shm DeviceMajor:0 DeviceMinor:278 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-115 DeviceMajor:0 DeviceMinor:115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95650a37daeacacf8e69d045d48ba4a17652648a0c83345072715e4ffcfa2dda/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d3ca2d2f-9f31-4524-a28f-cf16b02dd711/volumes/kubernetes.io~projected/kube-api-access-4jn8g DeviceMajor:0 DeviceMinor:259 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9bd614ac7dafc38d2154363d724a872731a806692546d4bc858006cdc5ade17d/userdata/shm DeviceMajor:0 DeviceMinor:285 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/31d71c90-cab7-4411-9426-0713cb026294/volumes/kubernetes.io~projected/kube-api-access-57cks DeviceMajor:0 DeviceMinor:268 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6315ef904771a7f7ee8f8fb64b568088a83f03dc9235439160e67d9df1c9a04f/userdata/shm DeviceMajor:0 DeviceMinor:286 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b6e6d218-d969-40b5-a32b-9b2093089dbf/volumes/kubernetes.io~projected/kube-api-access-psd59 
DeviceMajor:0 DeviceMinor:130 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3595d9d8fc957b18c48383f1ad0fcfa521ef5e3e33c6ab788b51ff8638981630/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/4c31b8a7-edcb-403d-9122-7eb740f7d659/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-125 DeviceMajor:0 DeviceMinor:125 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/33675e96-ce49-49be-9117-954ac7cca5d5/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:166 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-180 DeviceMajor:0 DeviceMinor:180 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1fe69517-eec2-4721-933c-fa27cea7ab1f/volumes/kubernetes.io~projected/kube-api-access-rnwtd DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/afc706c41127ee1f98bf413cd8a012a0e0a8f183eef4bf77721d14a272ded89e/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c81ad608-a8ad-4289-a8d2-d48acb9b540c/volumes/kubernetes.io~projected/kube-api-access-wj4dx DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/43e9807a-859c-44c1-8511-0066b0f59ff8/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~projected/kube-api-access-pzmqr DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1913b004153de96aee747d5e43e4468694e4be30746f1b0a2aa4f60e2176707c/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c7111b0bf2b7379929af69699174f229cbbc25f01fc7ffc44b3371950f17c6f2/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95115710de33578fe832a95630e8d98eba6ecc806a442bdc7740ad889ac1e80b/userdata/shm DeviceMajor:0 DeviceMinor:131 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/329b7497d730cc1438c1c88bd3563dab745cc5c71baf09835af567df43aee00e/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/db9dc349-5216-43ff-8c17-3a9384a010ea/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-310 DeviceMajor:0 DeviceMinor:310 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d15f8dfa0d113319aa72954517575419d7a6afcad7f7cef9517b2fb935c0ea42/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/900e244c-67aa-402f-b5f0-d37c5c1cedf7/volumes/kubernetes.io~projected/kube-api-access-n85mh DeviceMajor:0 DeviceMinor:257 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/43e9807a-859c-44c1-8511-0066b0f59ff8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/3f68274f91c27d15a060c5bac225b0b94e8aa70b90454461d048fa9e384a03df/userdata/shm DeviceMajor:0 DeviceMinor:283 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/26c5fe83ca44257f00aa75056a5ba23aa71fd99df73033faf567ea11ded1340f/userdata/shm DeviceMajor:0 DeviceMinor:143 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:263 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-182 DeviceMajor:0 DeviceMinor:182 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8b73ae08-0ad7-4f99-8002-6df0d984cd2c/volumes/kubernetes.io~projected/kube-api-access-mb46b DeviceMajor:0 DeviceMinor:258 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7190b6f768a0fe97808696f83db6e3236f51dc32c15727d9791bd6e154e97696/userdata/shm DeviceMajor:0 DeviceMinor:293 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e94527abc555de66f60f9e134865dfe60d787ebd1878546078cb9b2523c30cab/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/87cf4690-1ec1-44fc-94bd-730d9f2e6762/volumes/kubernetes.io~projected/kube-api-access-r9c94 DeviceMajor:0 
DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ea4c132-b6d0-4dc9-942d-48e359eed418/volumes/kubernetes.io~projected/kube-api-access-7nlf9 DeviceMajor:0 DeviceMinor:135 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4cede061-d85a-4366-9f1e-90be51f726fc/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:112 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volumes/kubernetes.io~projected/kube-api-access-gr6nr DeviceMajor:0 DeviceMinor:141 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-217 DeviceMajor:0 DeviceMinor:217 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/419f28a9-8fd7-4b59-9554-4d884a1208b5/volumes/kubernetes.io~projected/kube-api-access-fttgr DeviceMajor:0 DeviceMinor:265 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0ea53368ce61e6c8836a7d0c6d716b7e2c7e18ee974ab80f253b08e24d34227b/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5ea8ac7578359ce087855682fd87fbd08a72604f8701716ddbb28b051d93bff2/userdata/shm DeviceMajor:0 DeviceMinor:58 
Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5d2b154b-de63-4c9b-99d8-487fb3035fb9/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/989af121-da08-4f40-b08c-dd2aa67bc60c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/32a79fe0-e619-4a66-8617-e8111bdc7e96/volumes/kubernetes.io~projected/kube-api-access-jkq7j DeviceMajor:0 DeviceMinor:110 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-322 DeviceMajor:0 DeviceMinor:322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/db9dc349-5216-43ff-8c17-3a9384a010ea/volumes/kubernetes.io~projected/kube-api-access-smglm DeviceMajor:0 DeviceMinor:269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~projected/kube-api-access-gwb5n DeviceMajor:0 DeviceMinor:272 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-202 DeviceMajor:0 DeviceMinor:202 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8157f73d-c757-40c4-80bc-3c9de2f2288a/volumes/kubernetes.io~projected/kube-api-access-bk5m4 DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a1fb2774-6dd7-4429-9df3-4ddfcdaac939/volumes/kubernetes.io~projected/kube-api-access-jk9xr DeviceMajor:0 DeviceMinor:264 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9df920ca539f41ddc66a331c27bc3a12a40dbc8ec795ca71f8a746f6b5203647/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c81ad608-a8ad-4289-a8d2-d48acb9b540c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9fd9f419-2cdc-4991-8fb9-87d76ac58976/volumes/kubernetes.io~projected/kube-api-access-svlzf DeviceMajor:0 DeviceMinor:111 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/469af398b29095aa460373b4a9d58261db50995525853368aaa76c2198d9753f/userdata/shm DeviceMajor:0 DeviceMinor:117 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/989af121-da08-4f40-b08c-dd2aa67bc60c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:256 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d3ca2d2f-9f31-4524-a28f-cf16b02dd711/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/45d7ef0c-272b-4d1e-965f-484975d5d25c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1489b48b9281848030ac8650ba6a4f51919e00d3276dcba9cb79f43f94b0f041/userdata/shm DeviceMajor:0 DeviceMinor:113 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/33675e96-ce49-49be-9117-954ac7cca5d5/volumes/kubernetes.io~projected/kube-api-access-hbw6n DeviceMajor:0 DeviceMinor:167 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b953e5f23702f5654559767cf06b2635635ca7c579d9ee9d2d2bf61bf3d9a6b1/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-119 DeviceMajor:0 DeviceMinor:119 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 
Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:1913b004153de96 MacAddress:fa:52:7d:53:19:f5 Speed:10000 Mtu:8900} {Name:3f68274f91c27d1 MacAddress:f6:85:07:7c:af:c7 Speed:10000 Mtu:8900} {Name:6315ef904771a7f MacAddress:4e:fc:5a:bc:2f:59 Speed:10000 Mtu:8900} {Name:7190b6f768a0fe9 MacAddress:d6:5f:5d:88:d5:65 Speed:10000 Mtu:8900} {Name:92d6a373c92ade6 MacAddress:12:49:03:4d:f8:d5 Speed:10000 Mtu:8900} {Name:95650a37daeacac MacAddress:de:b2:a0:e7:da:18 Speed:10000 Mtu:8900} {Name:9bd614ac7dafc38 MacAddress:42:07:17:76:7c:36 Speed:10000 Mtu:8900} {Name:9df920ca539f41d MacAddress:de:0c:a4:5b:fe:69 Speed:10000 Mtu:8900} {Name:afc706c41127ee1 MacAddress:36:dc:a5:b3:e6:db Speed:10000 Mtu:8900} {Name:b4cf8dbc3fd31a2 MacAddress:96:7e:ea:03:b5:07 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:62:9a:a7:52:77:d8 Speed:0 Mtu:8900} {Name:e94527abc555de6 MacAddress:36:97:86:64:85:c0 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:2c:8d:77 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:3f:57:48 Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:1a:32:2a:84:99:77 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: 
DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 20 14:46:40.967314 master-0 kubenswrapper[7744]: I0220 14:46:40.967273 7744 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 20 14:46:40.967864 master-0 kubenswrapper[7744]: I0220 14:46:40.967477 7744 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 20 14:46:40.967864 master-0 kubenswrapper[7744]: I0220 14:46:40.967748 7744 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 20 14:46:40.968017 master-0 kubenswrapper[7744]: I0220 14:46:40.967984 7744 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 20 14:46:40.968296 master-0 kubenswrapper[7744]: I0220 14:46:40.968020 7744 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},
"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 20 14:46:40.968383 master-0 kubenswrapper[7744]: I0220 14:46:40.968315 7744 topology_manager.go:138] "Creating topology manager with none policy" Feb 20 14:46:40.968383 master-0 kubenswrapper[7744]: I0220 14:46:40.968332 7744 container_manager_linux.go:303] "Creating device plugin manager" Feb 20 14:46:40.968383 master-0 kubenswrapper[7744]: I0220 14:46:40.968346 7744 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 14:46:40.968383 master-0 kubenswrapper[7744]: I0220 14:46:40.968378 7744 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 14:46:40.968591 master-0 kubenswrapper[7744]: I0220 14:46:40.968512 7744 state_mem.go:36] "Initialized new in-memory state store" Feb 20 14:46:40.968646 master-0 kubenswrapper[7744]: I0220 14:46:40.968617 7744 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 20 14:46:40.968732 master-0 kubenswrapper[7744]: I0220 14:46:40.968707 7744 kubelet.go:418] "Attempting to sync node with API server" Feb 20 14:46:40.968732 master-0 kubenswrapper[7744]: I0220 14:46:40.968729 7744 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 20 14:46:40.968836 master-0 kubenswrapper[7744]: I0220 14:46:40.968751 7744 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 20 14:46:40.968836 master-0 kubenswrapper[7744]: I0220 14:46:40.968778 7744 kubelet.go:324] "Adding apiserver pod source" Feb 20 14:46:40.968836 master-0 
kubenswrapper[7744]: I0220 14:46:40.968802 7744 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 20 14:46:40.971072 master-0 kubenswrapper[7744]: I0220 14:46:40.971019 7744 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 20 14:46:40.971312 master-0 kubenswrapper[7744]: I0220 14:46:40.971271 7744 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 20 14:46:40.971812 master-0 kubenswrapper[7744]: I0220 14:46:40.971773 7744 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 20 14:46:40.972654 master-0 kubenswrapper[7744]: I0220 14:46:40.972175 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 20 14:46:40.972654 master-0 kubenswrapper[7744]: I0220 14:46:40.972261 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 20 14:46:40.972654 master-0 kubenswrapper[7744]: I0220 14:46:40.972279 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 20 14:46:40.972654 master-0 kubenswrapper[7744]: I0220 14:46:40.972291 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 20 14:46:40.972654 master-0 kubenswrapper[7744]: I0220 14:46:40.972304 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 20 14:46:40.972654 master-0 kubenswrapper[7744]: I0220 14:46:40.972317 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 20 14:46:40.972654 master-0 kubenswrapper[7744]: I0220 14:46:40.972330 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 20 14:46:40.972654 master-0 kubenswrapper[7744]: I0220 14:46:40.972342 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 20 14:46:40.972654 master-0 
kubenswrapper[7744]: I0220 14:46:40.972356 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 20 14:46:40.972654 master-0 kubenswrapper[7744]: I0220 14:46:40.972371 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 20 14:46:40.972654 master-0 kubenswrapper[7744]: I0220 14:46:40.972418 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 20 14:46:40.977206 master-0 kubenswrapper[7744]: I0220 14:46:40.976166 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 20 14:46:40.977206 master-0 kubenswrapper[7744]: I0220 14:46:40.976226 7744 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 20 14:46:40.977206 master-0 kubenswrapper[7744]: I0220 14:46:40.976599 7744 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 20 14:46:40.977206 master-0 kubenswrapper[7744]: I0220 14:46:40.976825 7744 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 20 14:46:40.977661 master-0 kubenswrapper[7744]: I0220 14:46:40.977616 7744 server.go:1280] "Started kubelet" Feb 20 14:46:40.978569 master-0 kubenswrapper[7744]: I0220 14:46:40.978217 7744 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 20 14:46:40.978569 master-0 kubenswrapper[7744]: I0220 14:46:40.978356 7744 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 20 14:46:40.979765 master-0 kubenswrapper[7744]: I0220 14:46:40.979709 7744 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 20 14:46:40.980464 master-0 systemd[1]: Started Kubernetes Kubelet. 
Feb 20 14:46:40.981690 master-0 kubenswrapper[7744]: I0220 14:46:40.981589 7744 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 20 14:46:40.990203 master-0 kubenswrapper[7744]: I0220 14:46:40.989562 7744 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 20 14:46:40.990203 master-0 kubenswrapper[7744]: I0220 14:46:40.989633 7744 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 20 14:46:40.990203 master-0 kubenswrapper[7744]: I0220 14:46:40.990146 7744 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 20 14:46:40.990203 master-0 kubenswrapper[7744]: I0220 14:46:40.990177 7744 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 20 14:46:40.990586 master-0 kubenswrapper[7744]: I0220 14:46:40.990371 7744 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 20 14:46:40.991524 master-0 kubenswrapper[7744]: I0220 14:46:40.991295 7744 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-21 14:36:18 +0000 UTC, rotation deadline is 2026-02-21 10:47:52.540678443 +0000 UTC Feb 20 14:46:40.991524 master-0 kubenswrapper[7744]: I0220 14:46:40.991502 7744 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h1m11.549182574s for next certificate rotation Feb 20 14:46:40.992506 master-0 kubenswrapper[7744]: I0220 14:46:40.992456 7744 server.go:449] "Adding debug handlers to kubelet server" Feb 20 14:46:40.992659 master-0 kubenswrapper[7744]: I0220 14:46:40.992642 7744 factory.go:153] Registering CRI-O factory Feb 20 14:46:40.992659 master-0 kubenswrapper[7744]: I0220 14:46:40.992661 7744 factory.go:221] Registration of the crio container factory successfully Feb 20 14:46:40.992833 master-0 kubenswrapper[7744]: I0220 14:46:40.992741 7744 factory.go:219] Registration of the containerd container factory failed: unable to create containerd 
client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 20 14:46:40.992833 master-0 kubenswrapper[7744]: I0220 14:46:40.992757 7744 factory.go:55] Registering systemd factory Feb 20 14:46:40.992833 master-0 kubenswrapper[7744]: I0220 14:46:40.992765 7744 factory.go:221] Registration of the systemd container factory successfully Feb 20 14:46:40.992833 master-0 kubenswrapper[7744]: I0220 14:46:40.992783 7744 factory.go:103] Registering Raw factory Feb 20 14:46:40.992833 master-0 kubenswrapper[7744]: I0220 14:46:40.992797 7744 manager.go:1196] Started watching for new ooms in manager Feb 20 14:46:40.993781 master-0 kubenswrapper[7744]: I0220 14:46:40.993730 7744 manager.go:319] Starting recovery of all containers Feb 20 14:46:40.995284 master-0 kubenswrapper[7744]: I0220 14:46:40.995193 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33675e96-ce49-49be-9117-954ac7cca5d5" volumeName="kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-env-overrides" seLinuxMountContext="" Feb 20 14:46:40.995284 master-0 kubenswrapper[7744]: I0220 14:46:40.995266 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="419f28a9-8fd7-4b59-9554-4d884a1208b5" volumeName="kubernetes.io/projected/419f28a9-8fd7-4b59-9554-4d884a1208b5-kube-api-access-fttgr" seLinuxMountContext="" Feb 20 14:46:40.995284 master-0 kubenswrapper[7744]: I0220 14:46:40.995286 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6e6d218-d969-40b5-a32b-9b2093089dbf" volumeName="kubernetes.io/projected/b6e6d218-d969-40b5-a32b-9b2093089dbf-kube-api-access-psd59" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995303 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c81ad608-a8ad-4289-a8d2-d48acb9b540c" volumeName="kubernetes.io/secret/c81ad608-a8ad-4289-a8d2-d48acb9b540c-serving-cert" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995320 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" volumeName="kubernetes.io/empty-dir/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-operand-assets" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995335 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db9dc349-5216-43ff-8c17-3a9384a010ea" volumeName="kubernetes.io/projected/db9dc349-5216-43ff-8c17-3a9384a010ea-kube-api-access-smglm" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995350 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" volumeName="kubernetes.io/secret/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995368 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21384bd0-495c-406a-9462-e9e740c04686" volumeName="kubernetes.io/secret/21384bd0-495c-406a-9462-e9e740c04686-ovn-node-metrics-cert" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995387 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32a79fe0-e619-4a66-8617-e8111bdc7e96" volumeName="kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-daemon-config" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995403 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="43e9807a-859c-44c1-8511-0066b0f59ff8" volumeName="kubernetes.io/projected/43e9807a-859c-44c1-8511-0066b0f59ff8-kube-api-access" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995419 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf4690-1ec1-44fc-94bd-730d9f2e6762" volumeName="kubernetes.io/configmap/87cf4690-1ec1-44fc-94bd-730d9f2e6762-iptables-alerter-script" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995435 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fd9f419-2cdc-4991-8fb9-87d76ac58976" volumeName="kubernetes.io/projected/9fd9f419-2cdc-4991-8fb9-87d76ac58976-kube-api-access-svlzf" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995451 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" volumeName="kubernetes.io/projected/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-kube-api-access-4jn8g" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995469 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21384bd0-495c-406a-9462-e9e740c04686" volumeName="kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-env-overrides" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995484 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21384bd0-495c-406a-9462-e9e740c04686" volumeName="kubernetes.io/projected/21384bd0-495c-406a-9462-e9e740c04686-kube-api-access-gr6nr" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995525 7744 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="8b73ae08-0ad7-4f99-8002-6df0d984cd2c" volumeName="kubernetes.io/configmap/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-config" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995541 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="989af121-da08-4f40-b08c-dd2aa67bc60c" volumeName="kubernetes.io/configmap/989af121-da08-4f40-b08c-dd2aa67bc60c-config" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995558 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0a3548f-299c-4234-9bf1-c93efcb9740b" volumeName="kubernetes.io/configmap/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-trusted-ca" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995572 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b73ae08-0ad7-4f99-8002-6df0d984cd2c" volumeName="kubernetes.io/secret/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-serving-cert" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995589 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="989af121-da08-4f40-b08c-dd2aa67bc60c" volumeName="kubernetes.io/projected/989af121-da08-4f40-b08c-dd2aa67bc60c-kube-api-access" seLinuxMountContext="" Feb 20 14:46:40.995572 master-0 kubenswrapper[7744]: I0220 14:46:40.995609 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43e9807a-859c-44c1-8511-0066b0f59ff8" volumeName="kubernetes.io/secret/43e9807a-859c-44c1-8511-0066b0f59ff8-serving-cert" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995626 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8157f73d-c757-40c4-80bc-3c9de2f2288a" volumeName="kubernetes.io/secret/8157f73d-c757-40c4-80bc-3c9de2f2288a-serving-cert" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995646 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fd9f419-2cdc-4991-8fb9-87d76ac58976" volumeName="kubernetes.io/secret/9fd9f419-2cdc-4991-8fb9-87d76ac58976-metrics-tls" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995663 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1" volumeName="kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-bound-sa-token" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995681 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fe69517-eec2-4721-933c-fa27cea7ab1f" volumeName="kubernetes.io/projected/1fe69517-eec2-4721-933c-fa27cea7ab1f-kube-api-access-rnwtd" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995700 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32a79fe0-e619-4a66-8617-e8111bdc7e96" volumeName="kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-cni-binary-copy" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995726 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33675e96-ce49-49be-9117-954ac7cca5d5" volumeName="kubernetes.io/secret/33675e96-ce49-49be-9117-954ac7cca5d5-webhook-cert" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995787 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a1fb2774-6dd7-4429-9df3-4ddfcdaac939" volumeName="kubernetes.io/projected/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-kube-api-access-jk9xr" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995805 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6e6d218-d969-40b5-a32b-9b2093089dbf" volumeName="kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-sysctl-allowlist" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995822 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c81ad608-a8ad-4289-a8d2-d48acb9b540c" volumeName="kubernetes.io/configmap/c81ad608-a8ad-4289-a8d2-d48acb9b540c-config" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995839 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a44fd-c153-47a6-a11d-7d4b7165c236" volumeName="kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-serving-cert" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995855 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d71c90-cab7-4411-9426-0713cb026294" volumeName="kubernetes.io/configmap/31d71c90-cab7-4411-9426-0713cb026294-trusted-ca" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995872 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45d7ef0c-272b-4d1e-965f-484975d5d25c" volumeName="kubernetes.io/secret/45d7ef0c-272b-4d1e-965f-484975d5d25c-serving-cert" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995891 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8157f73d-c757-40c4-80bc-3c9de2f2288a" volumeName="kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-service-ca-bundle" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995909 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="900e244c-67aa-402f-b5f0-d37c5c1cedf7" volumeName="kubernetes.io/projected/900e244c-67aa-402f-b5f0-d37c5c1cedf7-kube-api-access-n85mh" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995947 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="989af121-da08-4f40-b08c-dd2aa67bc60c" volumeName="kubernetes.io/secret/989af121-da08-4f40-b08c-dd2aa67bc60c-serving-cert" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995965 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c31b8a7-edcb-403d-9122-7eb740f7d659" volumeName="kubernetes.io/configmap/4c31b8a7-edcb-403d-9122-7eb740f7d659-config" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995980 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c31b8a7-edcb-403d-9122-7eb740f7d659" volumeName="kubernetes.io/secret/4c31b8a7-edcb-403d-9122-7eb740f7d659-serving-cert" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.995994 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cede061-d85a-4366-9f1e-90be51f726fc" volumeName="kubernetes.io/projected/4cede061-d85a-4366-9f1e-90be51f726fc-kube-api-access" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996011 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5d2b154b-de63-4c9b-99d8-487fb3035fb9" volumeName="kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovnkube-config" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996027 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf4690-1ec1-44fc-94bd-730d9f2e6762" volumeName="kubernetes.io/projected/87cf4690-1ec1-44fc-94bd-730d9f2e6762-kube-api-access-r9c94" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996076 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c81ad608-a8ad-4289-a8d2-d48acb9b540c" volumeName="kubernetes.io/projected/c81ad608-a8ad-4289-a8d2-d48acb9b540c-kube-api-access-wj4dx" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996093 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d71c90-cab7-4411-9426-0713cb026294" volumeName="kubernetes.io/projected/31d71c90-cab7-4411-9426-0713cb026294-kube-api-access-57cks" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996109 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d2b154b-de63-4c9b-99d8-487fb3035fb9" volumeName="kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-env-overrides" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996127 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1" volumeName="kubernetes.io/configmap/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-trusted-ca" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996142 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8157f73d-c757-40c4-80bc-3c9de2f2288a" volumeName="kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-trusted-ca-bundle" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996159 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8157f73d-c757-40c4-80bc-3c9de2f2288a" volumeName="kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-config" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996178 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b73ae08-0ad7-4f99-8002-6df0d984cd2c" volumeName="kubernetes.io/projected/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-kube-api-access-mb46b" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996195 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6e6d218-d969-40b5-a32b-9b2093089dbf" volumeName="kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-whereabouts-configmap" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996213 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a44fd-c153-47a6-a11d-7d4b7165c236" volumeName="kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-client" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996233 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4cede061-d85a-4366-9f1e-90be51f726fc" volumeName="kubernetes.io/configmap/4cede061-d85a-4366-9f1e-90be51f726fc-service-ca" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996249 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6e6d218-d969-40b5-a32b-9b2093089dbf" volumeName="kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-binary-copy" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996318 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a44fd-c153-47a6-a11d-7d4b7165c236" volumeName="kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-config" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996343 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c31b8a7-edcb-403d-9122-7eb740f7d659" volumeName="kubernetes.io/projected/4c31b8a7-edcb-403d-9122-7eb740f7d659-kube-api-access" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996362 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ea4c132-b6d0-4dc9-942d-48e359eed418" volumeName="kubernetes.io/projected/5ea4c132-b6d0-4dc9-942d-48e359eed418-kube-api-access-7nlf9" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996382 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0a3548f-299c-4234-9bf1-c93efcb9740b" volumeName="kubernetes.io/projected/c0a3548f-299c-4234-9bf1-c93efcb9740b-kube-api-access-7d5fq" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996400 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21384bd0-495c-406a-9462-e9e740c04686" volumeName="kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-config" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996421 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="234a44fd-c153-47a6-a11d-7d4b7165c236" volumeName="kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-ca" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996453 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33675e96-ce49-49be-9117-954ac7cca5d5" volumeName="kubernetes.io/projected/33675e96-ce49-49be-9117-954ac7cca5d5-kube-api-access-hbw6n" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996470 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="419f28a9-8fd7-4b59-9554-4d884a1208b5" volumeName="kubernetes.io/configmap/419f28a9-8fd7-4b59-9554-4d884a1208b5-telemetry-config" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996487 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21384bd0-495c-406a-9462-e9e740c04686" volumeName="kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-script-lib" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996504 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a44fd-c153-47a6-a11d-7d4b7165c236" volumeName="kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-service-ca" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996521 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32a79fe0-e619-4a66-8617-e8111bdc7e96" volumeName="kubernetes.io/projected/32a79fe0-e619-4a66-8617-e8111bdc7e96-kube-api-access-jkq7j" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996538 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5d2b154b-de63-4c9b-99d8-487fb3035fb9" volumeName="kubernetes.io/secret/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996554 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8157f73d-c757-40c4-80bc-3c9de2f2288a" volumeName="kubernetes.io/projected/8157f73d-c757-40c4-80bc-3c9de2f2288a-kube-api-access-bk5m4" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996570 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d28490b0-96ca-4fe0-8fae-e6f8390f933b" volumeName="kubernetes.io/projected/d28490b0-96ca-4fe0-8fae-e6f8390f933b-kube-api-access-qm5p2" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996587 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43e9807a-859c-44c1-8511-0066b0f59ff8" volumeName="kubernetes.io/configmap/43e9807a-859c-44c1-8511-0066b0f59ff8-config" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996604 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45d7ef0c-272b-4d1e-965f-484975d5d25c" volumeName="kubernetes.io/projected/45d7ef0c-272b-4d1e-965f-484975d5d25c-kube-api-access-svhtr" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996620 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d2b154b-de63-4c9b-99d8-487fb3035fb9" volumeName="kubernetes.io/projected/5d2b154b-de63-4c9b-99d8-487fb3035fb9-kube-api-access-mclrj" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996638 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="db9dc349-5216-43ff-8c17-3a9384a010ea" volumeName="kubernetes.io/configmap/db9dc349-5216-43ff-8c17-3a9384a010ea-config" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996653 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a44fd-c153-47a6-a11d-7d4b7165c236" volumeName="kubernetes.io/projected/234a44fd-c153-47a6-a11d-7d4b7165c236-kube-api-access-gwb5n" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996672 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33675e96-ce49-49be-9117-954ac7cca5d5" volumeName="kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-ovnkube-identity-cm" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996688 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45d7ef0c-272b-4d1e-965f-484975d5d25c" volumeName="kubernetes.io/configmap/45d7ef0c-272b-4d1e-965f-484975d5d25c-config" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996707 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1" volumeName="kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-kube-api-access-pzmqr" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996725 7744 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db9dc349-5216-43ff-8c17-3a9384a010ea" volumeName="kubernetes.io/secret/db9dc349-5216-43ff-8c17-3a9384a010ea-serving-cert" seLinuxMountContext="" Feb 20 14:46:40.997000 master-0 kubenswrapper[7744]: I0220 14:46:40.996740 7744 reconstruct.go:97] "Volume reconstruction finished" Feb 20 14:46:40.997000 master-0 
kubenswrapper[7744]: I0220 14:46:40.996751 7744 reconciler.go:26] "Reconciler: start to sync state" Feb 20 14:46:41.005286 master-0 kubenswrapper[7744]: I0220 14:46:41.004945 7744 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 20 14:46:41.005286 master-0 kubenswrapper[7744]: I0220 14:46:41.005227 7744 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 20 14:46:41.034295 master-0 kubenswrapper[7744]: I0220 14:46:41.034058 7744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 20 14:46:41.036426 master-0 kubenswrapper[7744]: I0220 14:46:41.036389 7744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 20 14:46:41.036500 master-0 kubenswrapper[7744]: I0220 14:46:41.036451 7744 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 20 14:46:41.036500 master-0 kubenswrapper[7744]: I0220 14:46:41.036483 7744 kubelet.go:2335] "Starting kubelet main sync loop" Feb 20 14:46:41.036601 master-0 kubenswrapper[7744]: E0220 14:46:41.036561 7744 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 20 14:46:41.038678 master-0 kubenswrapper[7744]: I0220 14:46:41.038632 7744 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 20 14:46:41.051861 master-0 kubenswrapper[7744]: I0220 14:46:41.051778 7744 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="92784546c39ab249199b64e99295b360ac694daa7345bcc5ca4290c1679248d5" exitCode=0 Feb 20 14:46:41.060357 master-0 kubenswrapper[7744]: I0220 14:46:41.060289 7744 generic.go:334] "Generic (PLEG): container finished" podID="014f3913-ac7e-431a-880c-91d979a5dfc7" containerID="d0525760cb8ba3e4a202836682905e3209d011265d322e121763f9e03af800fb" exitCode=0 Feb 20 14:46:41.067272 
master-0 kubenswrapper[7744]: I0220 14:46:41.067236 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 20 14:46:41.067669 master-0 kubenswrapper[7744]: I0220 14:46:41.067632 7744 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="45a697749c461413b0722aa1be0b316cc858779a0e80c5ef44f0a3c27a2f1822" exitCode=1 Feb 20 14:46:41.067669 master-0 kubenswrapper[7744]: I0220 14:46:41.067665 7744 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="8aa8f34057d37d62316a09602947b9934df303dc999d1b14efc423cb04940c72" exitCode=0 Feb 20 14:46:41.075840 master-0 kubenswrapper[7744]: I0220 14:46:41.075801 7744 generic.go:334] "Generic (PLEG): container finished" podID="21384bd0-495c-406a-9462-e9e740c04686" containerID="325237c1c62eee1b6dbe253582be0281f8aeaa79ed6559821ac6420b7b9c38ca" exitCode=0 Feb 20 14:46:41.086147 master-0 kubenswrapper[7744]: I0220 14:46:41.086108 7744 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="6e7cd59de9caeb6625ff93f951dca8b15c57f96db1e17aebced0a5231f411d3f" exitCode=0 Feb 20 14:46:41.086147 master-0 kubenswrapper[7744]: I0220 14:46:41.086146 7744 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="7caadc72530799fe020f6b0140bace32e6cb7e8ebbe6207315d6d035384c83d6" exitCode=0 Feb 20 14:46:41.086256 master-0 kubenswrapper[7744]: I0220 14:46:41.086162 7744 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="177aef91a6eb47e06724759a7ce69757e5533636be520f8861b5d3c44d7c4272" exitCode=0 Feb 20 14:46:41.086256 master-0 kubenswrapper[7744]: I0220 14:46:41.086177 7744 generic.go:334] "Generic (PLEG): container finished" 
podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="187177cba6632230d116641fd3dad458ff096f751d761a5c25483f731b58481b" exitCode=0 Feb 20 14:46:41.086256 master-0 kubenswrapper[7744]: I0220 14:46:41.086190 7744 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="86aaca74eb46c2a67484d7ed32bbe3315e4c31acc5fa267db57dbe7175337821" exitCode=0 Feb 20 14:46:41.086256 master-0 kubenswrapper[7744]: I0220 14:46:41.086203 7744 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="c092110b72556c746170c7d0567154da90861fa9515b4bc320e9e6d1cc856cd6" exitCode=0 Feb 20 14:46:41.088801 master-0 kubenswrapper[7744]: I0220 14:46:41.088759 7744 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="6dbf7c55ace0ed513f2aaaeda5aa48d72fc75a02defc6cc2063a7bcf59d1c27f" exitCode=1 Feb 20 14:46:41.091240 master-0 kubenswrapper[7744]: I0220 14:46:41.091210 7744 generic.go:334] "Generic (PLEG): container finished" podID="cbc6343c-22ec-4cf8-904f-6a93cd251993" containerID="7d3284edf21995a27c89886a69cea12c9862e571d30d2baf2f5e1bce4a1984d8" exitCode=0 Feb 20 14:46:41.136786 master-0 kubenswrapper[7744]: E0220 14:46:41.136668 7744 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 20 14:46:41.140306 master-0 kubenswrapper[7744]: I0220 14:46:41.140252 7744 manager.go:324] Recovery completed Feb 20 14:46:41.180894 master-0 kubenswrapper[7744]: I0220 14:46:41.180856 7744 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 20 14:46:41.180894 master-0 kubenswrapper[7744]: I0220 14:46:41.180880 7744 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 20 14:46:41.180894 master-0 kubenswrapper[7744]: I0220 14:46:41.180897 7744 state_mem.go:36] "Initialized new in-memory state store" Feb 20 14:46:41.181186 master-0 kubenswrapper[7744]: I0220 14:46:41.181067 7744 
state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 20 14:46:41.181186 master-0 kubenswrapper[7744]: I0220 14:46:41.181080 7744 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 20 14:46:41.181186 master-0 kubenswrapper[7744]: I0220 14:46:41.181103 7744 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 20 14:46:41.181186 master-0 kubenswrapper[7744]: I0220 14:46:41.181111 7744 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 20 14:46:41.181186 master-0 kubenswrapper[7744]: I0220 14:46:41.181118 7744 policy_none.go:49] "None policy: Start" Feb 20 14:46:41.183743 master-0 kubenswrapper[7744]: I0220 14:46:41.182723 7744 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 20 14:46:41.183743 master-0 kubenswrapper[7744]: I0220 14:46:41.182762 7744 state_mem.go:35] "Initializing new in-memory state store" Feb 20 14:46:41.183743 master-0 kubenswrapper[7744]: I0220 14:46:41.183013 7744 state_mem.go:75] "Updated machine memory state" Feb 20 14:46:41.183743 master-0 kubenswrapper[7744]: I0220 14:46:41.183023 7744 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 20 14:46:41.220795 master-0 kubenswrapper[7744]: I0220 14:46:41.220754 7744 manager.go:334] "Starting Device Plugin manager" Feb 20 14:46:41.221111 master-0 kubenswrapper[7744]: I0220 14:46:41.221062 7744 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 20 14:46:41.221111 master-0 kubenswrapper[7744]: I0220 14:46:41.221093 7744 server.go:79] "Starting device plugin registration server" Feb 20 14:46:41.221757 master-0 kubenswrapper[7744]: I0220 14:46:41.221726 7744 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 20 14:46:41.221885 master-0 kubenswrapper[7744]: I0220 14:46:41.221759 7744 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 20 
14:46:41.222509 master-0 kubenswrapper[7744]: I0220 14:46:41.222458 7744 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 20 14:46:41.222792 master-0 kubenswrapper[7744]: I0220 14:46:41.222748 7744 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 20 14:46:41.223066 master-0 kubenswrapper[7744]: I0220 14:46:41.222774 7744 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 20 14:46:41.323044 master-0 kubenswrapper[7744]: I0220 14:46:41.322430 7744 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 14:46:41.324863 master-0 kubenswrapper[7744]: I0220 14:46:41.324814 7744 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 14:46:41.324863 master-0 kubenswrapper[7744]: I0220 14:46:41.324861 7744 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 14:46:41.325051 master-0 kubenswrapper[7744]: I0220 14:46:41.324883 7744 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 14:46:41.325051 master-0 kubenswrapper[7744]: I0220 14:46:41.324981 7744 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 14:46:41.337564 master-0 kubenswrapper[7744]: I0220 14:46:41.337423 7744 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Feb 20 14:46:41.337967 master-0 kubenswrapper[7744]: I0220 14:46:41.337896 7744 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 20 14:46:41.338104 master-0 kubenswrapper[7744]: I0220 14:46:41.338073 
7744 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 20 14:46:41.338644 master-0 kubenswrapper[7744]: I0220 14:46:41.338576 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"d15f8dfa0d113319aa72954517575419d7a6afcad7f7cef9517b2fb935c0ea42"}
Feb 20 14:46:41.338799 master-0 kubenswrapper[7744]: I0220 14:46:41.338751 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"3c2b6c4d3887c6ce78fb1f319d3d917dd19b6ede5e9ab3d53c00d05b6ea4ef23"}
Feb 20 14:46:41.338890 master-0 kubenswrapper[7744]: I0220 14:46:41.338773 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"321be2d7453c33396b3363bf789e4d552d4e8d66090aa9915bf60f644a971c6e"}
Feb 20 14:46:41.338890 master-0 kubenswrapper[7744]: I0220 14:46:41.338839 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"92784546c39ab249199b64e99295b360ac694daa7345bcc5ca4290c1679248d5"}
Feb 20 14:46:41.338890 master-0 kubenswrapper[7744]: I0220 14:46:41.338854 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"c7111b0bf2b7379929af69699174f229cbbc25f01fc7ffc44b3371950f17c6f2"}
Feb 20 14:46:41.339130 master-0 kubenswrapper[7744]: I0220 14:46:41.338975 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="617ccef4b48beb8ed1f21a9b1c418d8de1fbc1ee6e5e89c3998a1f0b78051407"
Feb 20 14:46:41.339130 master-0 kubenswrapper[7744]: I0220 14:46:41.339051 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"246a4a72bf2ebfa2d43942f255f719a181c7fa6fae84b5f564297d3cc7eff684"}
Feb 20 14:46:41.339130 master-0 kubenswrapper[7744]: I0220 14:46:41.339068 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"45a697749c461413b0722aa1be0b316cc858779a0e80c5ef44f0a3c27a2f1822"}
Feb 20 14:46:41.339579 master-0 kubenswrapper[7744]: I0220 14:46:41.339446 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"8aa8f34057d37d62316a09602947b9934df303dc999d1b14efc423cb04940c72"}
Feb 20 14:46:41.339579 master-0 kubenswrapper[7744]: I0220 14:46:41.339475 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"5ea8ac7578359ce087855682fd87fbd08a72604f8701716ddbb28b051d93bff2"}
Feb 20 14:46:41.339579 master-0 kubenswrapper[7744]: I0220 14:46:41.339487 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10"}
Feb 20 14:46:41.339579 master-0 kubenswrapper[7744]: I0220 14:46:41.339500 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc"}
Feb 20 14:46:41.339579 master-0 kubenswrapper[7744]: I0220 14:46:41.339512 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"b953e5f23702f5654559767cf06b2635635ca7c579d9ee9d2d2bf61bf3d9a6b1"}
Feb 20 14:46:41.339579 master-0 kubenswrapper[7744]: I0220 14:46:41.339569 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"24b2aee1f972d18ca4405ff399927f57d407665113e657b4f3db6303afde8747"}
Feb 20 14:46:41.339579 master-0 kubenswrapper[7744]: I0220 14:46:41.339583 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2"}
Feb 20 14:46:41.339992 master-0 kubenswrapper[7744]: I0220 14:46:41.339599 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"6dbf7c55ace0ed513f2aaaeda5aa48d72fc75a02defc6cc2063a7bcf59d1c27f"}
Feb 20 14:46:41.339992 master-0 kubenswrapper[7744]: I0220 14:46:41.339612 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"ff5aeff3d91fe04ad5b35e5f18daa8ee28aba3161b0999bafdb650c9674062ac"}
Feb 20 14:46:41.339992 master-0 kubenswrapper[7744]: I0220 14:46:41.339631 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f15a77dc38bd5f18a50d69cdc8939c3167e1a8020322420263668615a817067f"
Feb 20 14:46:41.356954 master-0 kubenswrapper[7744]: W0220 14:46:41.356737 7744 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Feb 20 14:46:41.357133 master-0 kubenswrapper[7744]: E0220 14:46:41.357020 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:46:41.357401 master-0 kubenswrapper[7744]: E0220 14:46:41.357343 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:46:41.357653 master-0 kubenswrapper[7744]: E0220 14:46:41.357600 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:46:41.358082 master-0 kubenswrapper[7744]: E0220 14:46:41.358026 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.358143 master-0 kubenswrapper[7744]: E0220 14:46:41.358053 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.407444 master-0 kubenswrapper[7744]: I0220 14:46:41.407262 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.407444 master-0 kubenswrapper[7744]: I0220 14:46:41.407376 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.407598 master-0 kubenswrapper[7744]: I0220 14:46:41.407470 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.407598 master-0 kubenswrapper[7744]: I0220 14:46:41.407579 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.407867 master-0 kubenswrapper[7744]: I0220 14:46:41.407812 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.408128 master-0 kubenswrapper[7744]: I0220 14:46:41.408072 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.408242 master-0 kubenswrapper[7744]: I0220 14:46:41.408173 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:46:41.408358 master-0 kubenswrapper[7744]: I0220 14:46:41.408259 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:46:41.408358 master-0 kubenswrapper[7744]: I0220 14:46:41.408328 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:46:41.408543 master-0 kubenswrapper[7744]: I0220 14:46:41.408361 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:46:41.408543 master-0 kubenswrapper[7744]: I0220 14:46:41.408428 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:46:41.408709 master-0 kubenswrapper[7744]: I0220 14:46:41.408526 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:46:41.408709 master-0 kubenswrapper[7744]: I0220 14:46:41.408605 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.408709 master-0 kubenswrapper[7744]: I0220 14:46:41.408659 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.408977 master-0 kubenswrapper[7744]: I0220 14:46:41.408715 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.408977 master-0 kubenswrapper[7744]: I0220 14:46:41.408762 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.408977 master-0 kubenswrapper[7744]: I0220 14:46:41.408804 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.509551 master-0 kubenswrapper[7744]: I0220 14:46:41.509464 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.509551 master-0 kubenswrapper[7744]: I0220 14:46:41.509543 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.509950 master-0 kubenswrapper[7744]: I0220 14:46:41.509581 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:46:41.509950 master-0 kubenswrapper[7744]: I0220 14:46:41.509613 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:46:41.509950 master-0 kubenswrapper[7744]: I0220 14:46:41.509647 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.509950 master-0 kubenswrapper[7744]: I0220 14:46:41.509678 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.509950 master-0 kubenswrapper[7744]: I0220 14:46:41.509707 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:46:41.509950 master-0 kubenswrapper[7744]: I0220 14:46:41.509736 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:46:41.509950 master-0 kubenswrapper[7744]: I0220 14:46:41.509768 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:46:41.509950 master-0 kubenswrapper[7744]: I0220 14:46:41.509797 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:46:41.509950 master-0 kubenswrapper[7744]: I0220 14:46:41.509827 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.509950 master-0 kubenswrapper[7744]: I0220 14:46:41.509856 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.509950 master-0 kubenswrapper[7744]: I0220 14:46:41.509891 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.509950 master-0 kubenswrapper[7744]: I0220 14:46:41.509945 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.510768 master-0 kubenswrapper[7744]: I0220 14:46:41.509977 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.510768 master-0 kubenswrapper[7744]: I0220 14:46:41.510009 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.510768 master-0 kubenswrapper[7744]: I0220 14:46:41.510393 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.510768 master-0 kubenswrapper[7744]: I0220 14:46:41.510510 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.510768 master-0 kubenswrapper[7744]: I0220 14:46:41.510564 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.510768 master-0 kubenswrapper[7744]: I0220 14:46:41.510624 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.510768 master-0 kubenswrapper[7744]: I0220 14:46:41.510681 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.510768 master-0 kubenswrapper[7744]: I0220 14:46:41.510726 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.511280 master-0 kubenswrapper[7744]: I0220 14:46:41.510800 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:46:41.511280 master-0 kubenswrapper[7744]: I0220 14:46:41.510890 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.511280 master-0 kubenswrapper[7744]: I0220 14:46:41.511004 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:46:41.511280 master-0 kubenswrapper[7744]: I0220 14:46:41.511067 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:46:41.511280 master-0 kubenswrapper[7744]: I0220 14:46:41.511131 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.511280 master-0 kubenswrapper[7744]: I0220 14:46:41.511195 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.511280 master-0 kubenswrapper[7744]: I0220 14:46:41.511258 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.511619 master-0 kubenswrapper[7744]: I0220 14:46:41.511327 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:41.511619 master-0 kubenswrapper[7744]: I0220 14:46:41.511393 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:46:41.511619 master-0 kubenswrapper[7744]: I0220 14:46:41.511452 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:41.511619 master-0 kubenswrapper[7744]: I0220 14:46:41.511505 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 20 14:46:41.511619 master-0 kubenswrapper[7744]: I0220 14:46:41.511570 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 14:46:41.969799 master-0 kubenswrapper[7744]: I0220 14:46:41.969718 7744 apiserver.go:52] "Watching apiserver"
Feb 20 14:46:41.978992 master-0 kubenswrapper[7744]: I0220 14:46:41.978948 7744 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 20 14:46:41.979949 master-0 kubenswrapper[7744]: I0220 14:46:41.979868 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr","openshift-network-operator/iptables-alerter-cgp8r","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24","openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4","openshift-network-diagnostics/network-check-target-ljvkb","openshift-network-operator/network-operator-7d7db75979-tj8fx","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx","openshift-ovn-kubernetes/ovnkube-node-5gzs6","openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c","openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z","openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x","openshift-multus/multus-m6hpf","openshift-multus/network-metrics-daemon-99lkv","openshift-network-node-identity/network-node-identity-gprr4","assisted-installer/assisted-installer-controller-wtxfh","kube-system/bootstrap-kube-scheduler-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww","openshift-multus/multus-additional-cni-plugins-6ts4p","openshift-marketplace/marketplace-operator-6f5488b997-97m7r","kube-system/bootstrap-kube-controller-manager-master-0","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq","openshift-dns-operator/dns-operator-8c7d49845-gkrph","openshift-etcd/etcd-master-0-master-0","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj"]
Feb 20 14:46:41.980357 master-0 kubenswrapper[7744]: I0220 14:46:41.980276 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-wtxfh"
Feb 20 14:46:41.980422 master-0 kubenswrapper[7744]: I0220 14:46:41.980302 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:46:41.980422 master-0 kubenswrapper[7744]: I0220 14:46:41.980399 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:46:41.986373 master-0 kubenswrapper[7744]: I0220 14:46:41.986317 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph"
Feb 20 14:46:41.986575 master-0 kubenswrapper[7744]: I0220 14:46:41.986529 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:46:41.987638 master-0 kubenswrapper[7744]: I0220 14:46:41.987602 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:41.988622 master-0 kubenswrapper[7744]: I0220 14:46:41.988582 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"
Feb 20 14:46:41.988844 master-0 kubenswrapper[7744]: I0220 14:46:41.988789 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 20 14:46:41.988909 master-0 kubenswrapper[7744]: I0220 14:46:41.988870 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"
Feb 20 14:46:41.989204 master-0 kubenswrapper[7744]: I0220 14:46:41.989163 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 20 14:46:41.989305 master-0 kubenswrapper[7744]: I0220 14:46:41.989256 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 20 14:46:41.989364 master-0 kubenswrapper[7744]: I0220 14:46:41.989332 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 20 14:46:41.989486 master-0 kubenswrapper[7744]: I0220 14:46:41.989448 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 20 14:46:41.989940 master-0 kubenswrapper[7744]: I0220 14:46:41.989819 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 20 14:46:41.990320 master-0 kubenswrapper[7744]: I0220 14:46:41.990075 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 20 14:46:41.990374 master-0 kubenswrapper[7744]: I0220 14:46:41.990361 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 20 14:46:41.990471 master-0 kubenswrapper[7744]: I0220 14:46:41.990441 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 20 14:46:41.990513 master-0 kubenswrapper[7744]: I0220 14:46:41.990477 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 20 14:46:41.990796 master-0 kubenswrapper[7744]: I0220 14:46:41.990429 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 20 14:46:41.990860 master-0 kubenswrapper[7744]: I0220 14:46:41.990812 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 20 14:46:41.991126 master-0 kubenswrapper[7744]: I0220 14:46:41.991081 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 20 14:46:41.991187 master-0 kubenswrapper[7744]: I0220 14:46:41.991176 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 14:46:41.991325 master-0 kubenswrapper[7744]: I0220 14:46:41.991295 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 20 14:46:41.993357 master-0 kubenswrapper[7744]: I0220 14:46:41.993321 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 20 14:46:41.993757 master-0 kubenswrapper[7744]: I0220 14:46:41.993724 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 20 14:46:41.994098 master-0 kubenswrapper[7744]: I0220 14:46:41.994064 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 20 14:46:41.994759 master-0 kubenswrapper[7744]: I0220 14:46:41.994727 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:41.996651 master-0 kubenswrapper[7744]: I0220 14:46:41.994721 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 14:46:41.996651 master-0 kubenswrapper[7744]: I0220 14:46:41.994783 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 20 14:46:41.996651 master-0 kubenswrapper[7744]: I0220 14:46:41.995747 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"
Feb 20 14:46:41.997573 master-0 kubenswrapper[7744]: I0220 14:46:41.997517 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 20 14:46:41.997803 master-0 kubenswrapper[7744]: I0220 14:46:41.997761 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:41.998224 master-0 kubenswrapper[7744]: I0220 14:46:41.998195 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 20 14:46:41.998367 master-0 kubenswrapper[7744]: I0220 14:46:41.998350 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 20 14:46:41.998463 master-0 kubenswrapper[7744]: I0220 14:46:41.998445 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 20 14:46:41.998538 master-0 kubenswrapper[7744]: I0220 14:46:41.998519 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 20 14:46:42.002260 master-0 kubenswrapper[7744]: I0220 14:46:42.002186 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 20 14:46:42.003097 master-0 kubenswrapper[7744]: I0220 14:46:42.003052 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 20 14:46:42.003248 master-0 kubenswrapper[7744]: I0220 14:46:42.003206 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 20 14:46:42.003417 master-0 kubenswrapper[7744]: I0220 14:46:42.003383 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 20 14:46:42.003507 master-0 kubenswrapper[7744]: I0220 14:46:42.003473 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 20 14:46:42.003836 master-0 kubenswrapper[7744]: I0220 14:46:42.003692 7744
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 20 14:46:42.005099 master-0 kubenswrapper[7744]: I0220 14:46:42.005051 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 20 14:46:42.005504 master-0 kubenswrapper[7744]: I0220 14:46:42.005466 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 20 14:46:42.006746 master-0 kubenswrapper[7744]: I0220 14:46:42.005472 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 20 14:46:42.006858 master-0 kubenswrapper[7744]: I0220 14:46:42.006013 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 20 14:46:42.006858 master-0 kubenswrapper[7744]: I0220 14:46:42.006055 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 20 14:46:42.007023 master-0 kubenswrapper[7744]: I0220 14:46:42.006087 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 20 14:46:42.007023 master-0 kubenswrapper[7744]: I0220 14:46:42.006206 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 20 14:46:42.007023 master-0 kubenswrapper[7744]: I0220 14:46:42.006606 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 20 14:46:42.008890 master-0 kubenswrapper[7744]: I0220 14:46:42.008605 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 20 14:46:42.009360 master-0 kubenswrapper[7744]: I0220 14:46:42.009273 7744 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 20 14:46:42.009360 master-0 kubenswrapper[7744]: I0220 14:46:42.009289 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 20 14:46:42.010195 master-0 kubenswrapper[7744]: I0220 14:46:42.009586 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 20 14:46:42.010195 master-0 kubenswrapper[7744]: I0220 14:46:42.009596 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 20 14:46:42.010195 master-0 kubenswrapper[7744]: I0220 14:46:42.009833 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 20 14:46:42.010195 master-0 kubenswrapper[7744]: I0220 14:46:42.010012 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 20 14:46:42.010858 master-0 kubenswrapper[7744]: I0220 14:46:42.010416 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 20 14:46:42.011643 master-0 kubenswrapper[7744]: I0220 14:46:42.011037 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 20 14:46:42.011643 master-0 kubenswrapper[7744]: I0220 14:46:42.011132 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 20 14:46:42.011643 master-0 kubenswrapper[7744]: I0220 14:46:42.011228 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 20 14:46:42.011643 master-0 kubenswrapper[7744]: I0220 14:46:42.011345 7744 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Feb 20 14:46:42.011643 master-0 kubenswrapper[7744]: I0220 14:46:42.011382 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 20 14:46:42.011643 master-0 kubenswrapper[7744]: I0220 14:46:42.011525 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 20 14:46:42.011643 master-0 kubenswrapper[7744]: I0220 14:46:42.011624 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 20 14:46:42.012139 master-0 kubenswrapper[7744]: I0220 14:46:42.011664 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 20 14:46:42.012139 master-0 kubenswrapper[7744]: I0220 14:46:42.011777 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 20 14:46:42.012139 master-0 kubenswrapper[7744]: I0220 14:46:42.011873 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 20 14:46:42.012139 master-0 kubenswrapper[7744]: I0220 14:46:42.011913 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 20 14:46:42.012139 master-0 kubenswrapper[7744]: I0220 14:46:42.012031 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 20 14:46:42.012139 master-0 kubenswrapper[7744]: I0220 14:46:42.012035 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 20 14:46:42.012448 master-0 kubenswrapper[7744]: I0220 14:46:42.012174 7744 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 20 14:46:42.012511 master-0 kubenswrapper[7744]: I0220 14:46:42.012447 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 20 14:46:42.013323 master-0 kubenswrapper[7744]: I0220 14:46:42.012693 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 20 14:46:42.013323 master-0 kubenswrapper[7744]: I0220 14:46:42.012847 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 20 14:46:42.013323 master-0 kubenswrapper[7744]: I0220 14:46:42.012914 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 20 14:46:42.013323 master-0 kubenswrapper[7744]: I0220 14:46:42.013069 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 20 14:46:42.013323 master-0 kubenswrapper[7744]: I0220 14:46:42.013095 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 20 14:46:42.013323 master-0 kubenswrapper[7744]: I0220 14:46:42.013117 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 20 14:46:42.013708 master-0 kubenswrapper[7744]: I0220 14:46:42.013425 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57cks\" (UniqueName: \"kubernetes.io/projected/31d71c90-cab7-4411-9426-0713cb026294-kube-api-access-57cks\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:42.013708 master-0 kubenswrapper[7744]: I0220 
14:46:42.013469 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 14:46:42.013708 master-0 kubenswrapper[7744]: I0220 14:46:42.013607 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 14:46:42.013970 master-0 kubenswrapper[7744]: I0220 14:46:42.013742 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 20 14:46:42.014237 master-0 kubenswrapper[7744]: I0220 14:46:42.014131 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:46:42.015225 master-0 kubenswrapper[7744]: I0220 14:46:42.014406 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:42.015225 master-0 kubenswrapper[7744]: I0220 14:46:42.014460 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-multus\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.015225 master-0 kubenswrapper[7744]: I0220 14:46:42.014495 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 20 14:46:42.015225 master-0 kubenswrapper[7744]: I0220 14:46:42.014590 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c81ad608-a8ad-4289-a8d2-d48acb9b540c-serving-cert\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 14:46:42.015225 master-0 kubenswrapper[7744]: I0220 14:46:42.014698 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 20 14:46:42.015225 master-0 kubenswrapper[7744]: I0220 14:46:42.014768 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 20 14:46:42.015225 master-0 kubenswrapper[7744]: I0220 14:46:42.014839 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 20 14:46:42.015225 master-0 kubenswrapper[7744]: I0220 14:46:42.015203 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c81ad608-a8ad-4289-a8d2-d48acb9b540c-serving-cert\") pod 
\"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 14:46:42.015737 master-0 kubenswrapper[7744]: I0220 14:46:42.015265 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 20 14:46:42.015737 master-0 kubenswrapper[7744]: I0220 14:46:42.015282 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c81ad608-a8ad-4289-a8d2-d48acb9b540c-config\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 14:46:42.015737 master-0 kubenswrapper[7744]: I0220 14:46:42.015373 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj4dx\" (UniqueName: \"kubernetes.io/projected/c81ad608-a8ad-4289-a8d2-d48acb9b540c-kube-api-access-wj4dx\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 14:46:42.015737 master-0 kubenswrapper[7744]: I0220 14:46:42.015448 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n85mh\" (UniqueName: \"kubernetes.io/projected/900e244c-67aa-402f-b5f0-d37c5c1cedf7-kube-api-access-n85mh\") pod \"csi-snapshot-controller-operator-6fb4df594f-p29qr\" (UID: \"900e244c-67aa-402f-b5f0-d37c5c1cedf7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr" Feb 20 14:46:42.015737 master-0 kubenswrapper[7744]: I0220 14:46:42.015516 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-k8s-cni-cncf-io\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.015737 master-0 kubenswrapper[7744]: I0220 14:46:42.015542 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:42.015737 master-0 kubenswrapper[7744]: I0220 14:46:42.015563 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smglm\" (UniqueName: \"kubernetes.io/projected/db9dc349-5216-43ff-8c17-3a9384a010ea-kube-api-access-smglm\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 14:46:42.015737 master-0 kubenswrapper[7744]: I0220 14:46:42.015584 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-netns\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.015737 master-0 kubenswrapper[7744]: I0220 14:46:42.015424 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 20 14:46:42.015737 master-0 kubenswrapper[7744]: I0220 14:46:42.015652 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 20 14:46:42.015737 master-0 kubenswrapper[7744]: I0220 
14:46:42.015630 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/989af121-da08-4f40-b08c-dd2aa67bc60c-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 14:46:42.016055 master-0 kubenswrapper[7744]: I0220 14:46:42.015475 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 20 14:46:42.016055 master-0 kubenswrapper[7744]: I0220 14:46:42.015802 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/419f28a9-8fd7-4b59-9554-4d884a1208b5-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:46:42.016055 master-0 kubenswrapper[7744]: I0220 14:46:42.015865 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwb5n\" (UniqueName: \"kubernetes.io/projected/234a44fd-c153-47a6-a11d-7d4b7165c236-kube-api-access-gwb5n\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:42.016055 master-0 kubenswrapper[7744]: I0220 14:46:42.015470 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 20 14:46:42.016055 master-0 kubenswrapper[7744]: I0220 14:46:42.015962 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c81ad608-a8ad-4289-a8d2-d48acb9b540c-config\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: 
\"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 14:46:42.016055 master-0 kubenswrapper[7744]: I0220 14:46:42.016014 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb46b\" (UniqueName: \"kubernetes.io/projected/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-kube-api-access-mb46b\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 14:46:42.016209 master-0 kubenswrapper[7744]: I0220 14:46:42.016068 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c31b8a7-edcb-403d-9122-7eb740f7d659-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 14:46:42.016209 master-0 kubenswrapper[7744]: I0220 14:46:42.016123 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svlzf\" (UniqueName: \"kubernetes.io/projected/9fd9f419-2cdc-4991-8fb9-87d76ac58976-kube-api-access-svlzf\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" Feb 20 14:46:42.016209 master-0 kubenswrapper[7744]: I0220 14:46:42.016186 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/989af121-da08-4f40-b08c-dd2aa67bc60c-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 14:46:42.016289 
master-0 kubenswrapper[7744]: I0220 14:46:42.015607 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 20 14:46:42.016289 master-0 kubenswrapper[7744]: I0220 14:46:42.016248 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/989af121-da08-4f40-b08c-dd2aa67bc60c-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 14:46:42.016376 master-0 kubenswrapper[7744]: I0220 14:46:42.016338 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-daemon-config\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.016425 master-0 kubenswrapper[7744]: I0220 14:46:42.016394 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:42.016460 master-0 kubenswrapper[7744]: I0220 14:46:42.016438 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-client\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:42.016520 master-0 kubenswrapper[7744]: I0220 14:46:42.016491 7744 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:42.016627 master-0 kubenswrapper[7744]: I0220 14:46:42.016608 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/989af121-da08-4f40-b08c-dd2aa67bc60c-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 14:46:42.016665 master-0 kubenswrapper[7744]: I0220 14:46:42.016648 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svhtr\" (UniqueName: \"kubernetes.io/projected/45d7ef0c-272b-4d1e-965f-484975d5d25c-kube-api-access-svhtr\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 14:46:42.016705 master-0 kubenswrapper[7744]: I0220 14:46:42.016668 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c31b8a7-edcb-403d-9122-7eb740f7d659-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 14:46:42.016705 master-0 kubenswrapper[7744]: I0220 14:46:42.016688 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-cni-binary-copy\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.016756 master-0 kubenswrapper[7744]: I0220 14:46:42.016737 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-bin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.016805 master-0 kubenswrapper[7744]: I0220 14:46:42.016780 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-conf-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.016840 master-0 kubenswrapper[7744]: I0220 14:46:42.016803 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:42.016868 master-0 kubenswrapper[7744]: I0220 14:46:42.016826 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d5fq\" (UniqueName: \"kubernetes.io/projected/c0a3548f-299c-4234-9bf1-c93efcb9740b-kube-api-access-7d5fq\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:46:42.017012 master-0 kubenswrapper[7744]: I0220 14:46:42.016958 7744 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db9dc349-5216-43ff-8c17-3a9384a010ea-config\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 14:46:42.017092 master-0 kubenswrapper[7744]: I0220 14:46:42.017058 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:46:42.017092 master-0 kubenswrapper[7744]: I0220 14:46:42.017088 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-client\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:42.017206 master-0 kubenswrapper[7744]: I0220 14:46:42.017111 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:46:42.017206 master-0 kubenswrapper[7744]: I0220 14:46:42.017170 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-config\") pod 
\"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:42.017263 master-0 kubenswrapper[7744]: I0220 14:46:42.017207 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:46:42.017263 master-0 kubenswrapper[7744]: I0220 14:46:42.017241 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45d7ef0c-272b-4d1e-965f-484975d5d25c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 14:46:42.017320 master-0 kubenswrapper[7744]: I0220 14:46:42.017277 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-hostroot\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.017320 master-0 kubenswrapper[7744]: I0220 14:46:42.017297 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db9dc349-5216-43ff-8c17-3a9384a010ea-config\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 14:46:42.017375 master-0 kubenswrapper[7744]: I0220 14:46:42.017305 7744 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 20 14:46:42.017375 master-0 kubenswrapper[7744]: I0220 14:46:42.017334 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 20 14:46:42.017437 master-0 kubenswrapper[7744]: I0220 14:46:42.017316 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-etc-kubernetes\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.017437 master-0 kubenswrapper[7744]: I0220 14:46:42.017412 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fd9f419-2cdc-4991-8fb9-87d76ac58976-host-etc-kube\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" Feb 20 14:46:42.017487 master-0 kubenswrapper[7744]: I0220 14:46:42.017436 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:46:42.017487 master-0 kubenswrapper[7744]: I0220 14:46:42.017456 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 20 14:46:42.017487 master-0 kubenswrapper[7744]: I0220 14:46:42.017461 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-config\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:42.017567 master-0 kubenswrapper[7744]: I0220 14:46:42.017528 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 20 14:46:42.017567 master-0 kubenswrapper[7744]: I0220 14:46:42.017562 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 20 14:46:42.017617 master-0 kubenswrapper[7744]: I0220 14:46:42.017456 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fttgr\" (UniqueName: \"kubernetes.io/projected/419f28a9-8fd7-4b59-9554-4d884a1208b5-kube-api-access-fttgr\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:46:42.017617 master-0 kubenswrapper[7744]: I0220 14:46:42.017592 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:42.017667 master-0 kubenswrapper[7744]: I0220 14:46:42.017620 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.017667 master-0 kubenswrapper[7744]: I0220 14:46:42.017638 
7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-multus-certs\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.017667 master-0 kubenswrapper[7744]: I0220 14:46:42.017659 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:46:42.017748 master-0 kubenswrapper[7744]: I0220 14:46:42.017685 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk5m4\" (UniqueName: \"kubernetes.io/projected/8157f73d-c757-40c4-80bc-3c9de2f2288a-kube-api-access-bk5m4\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:42.017748 master-0 kubenswrapper[7744]: I0220 14:46:42.017702 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-serving-cert\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:42.017748 master-0 kubenswrapper[7744]: I0220 14:46:42.017719 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-config\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: 
\"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:42.017748 master-0 kubenswrapper[7744]: I0220 14:46:42.017716 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45d7ef0c-272b-4d1e-965f-484975d5d25c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 14:46:42.017849 master-0 kubenswrapper[7744]: I0220 14:46:42.017753 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:42.017849 master-0 kubenswrapper[7744]: I0220 14:46:42.017771 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e9807a-859c-44c1-8511-0066b0f59ff8-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 14:46:42.017849 master-0 kubenswrapper[7744]: I0220 14:46:42.017821 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c31b8a7-edcb-403d-9122-7eb740f7d659-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 14:46:42.017938 master-0 kubenswrapper[7744]: I0220 14:46:42.017862 7744 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnwtd\" (UniqueName: \"kubernetes.io/projected/1fe69517-eec2-4721-933c-fa27cea7ab1f-kube-api-access-rnwtd\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:46:42.017978 master-0 kubenswrapper[7744]: I0220 14:46:42.017568 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 20 14:46:42.017978 master-0 kubenswrapper[7744]: I0220 14:46:42.017913 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:46:42.018031 master-0 kubenswrapper[7744]: I0220 14:46:42.017984 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cede061-d85a-4366-9f1e-90be51f726fc-service-ca\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:46:42.018064 master-0 kubenswrapper[7744]: I0220 14:46:42.018029 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9fd9f419-2cdc-4991-8fb9-87d76ac58976-metrics-tls\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" Feb 20 14:46:42.018064 master-0 kubenswrapper[7744]: I0220 14:46:42.018053 7744 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e9807a-859c-44c1-8511-0066b0f59ff8-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 14:46:42.018210 master-0 kubenswrapper[7744]: I0220 14:46:42.018180 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-serving-cert\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:42.018210 master-0 kubenswrapper[7744]: I0220 14:46:42.018191 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:42.018272 master-0 kubenswrapper[7744]: I0220 14:46:42.017607 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 20 14:46:42.018299 master-0 kubenswrapper[7744]: I0220 14:46:42.018285 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:42.018327 master-0 kubenswrapper[7744]: I0220 14:46:42.017660 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 20 14:46:42.018418 master-0 
kubenswrapper[7744]: I0220 14:46:42.017698 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 20 14:46:42.018418 master-0 kubenswrapper[7744]: I0220 14:46:42.017812 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 20 14:46:42.018472 master-0 kubenswrapper[7744]: I0220 14:46:42.017852 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 20 14:46:42.018542 master-0 kubenswrapper[7744]: I0220 14:46:42.017884 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 20 14:46:42.018542 master-0 kubenswrapper[7744]: I0220 14:46:42.018528 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9fd9f419-2cdc-4991-8fb9-87d76ac58976-metrics-tls\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" Feb 20 14:46:42.018617 master-0 kubenswrapper[7744]: I0220 14:46:42.018547 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 14:46:42.018854 master-0 kubenswrapper[7744]: I0220 14:46:42.018634 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod 
\"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:46:42.018854 master-0 kubenswrapper[7744]: I0220 14:46:42.018713 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43e9807a-859c-44c1-8511-0066b0f59ff8-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 14:46:42.018854 master-0 kubenswrapper[7744]: I0220 14:46:42.018752 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 14:46:42.018854 master-0 kubenswrapper[7744]: I0220 14:46:42.018756 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31d71c90-cab7-4411-9426-0713cb026294-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:42.018854 master-0 kubenswrapper[7744]: I0220 14:46:42.018767 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-config\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 14:46:42.018854 master-0 kubenswrapper[7744]: I0220 
14:46:42.018793 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-system-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.019063 master-0 kubenswrapper[7744]: I0220 14:46:42.018808 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cede061-d85a-4366-9f1e-90be51f726fc-service-ca\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019228 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019301 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019338 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d7ef0c-272b-4d1e-965f-484975d5d25c-config\") pod 
\"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019361 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cede061-d85a-4366-9f1e-90be51f726fc-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019386 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzmqr\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-kube-api-access-pzmqr\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019415 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jn8g\" (UniqueName: \"kubernetes.io/projected/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-kube-api-access-4jn8g\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019440 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/989af121-da08-4f40-b08c-dd2aa67bc60c-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019470 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-cni-binary-copy\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019507 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019625 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db9dc349-5216-43ff-8c17-3a9384a010ea-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019675 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019707 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019729 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-socket-dir-parent\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019784 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-kubelet\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019842 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkq7j\" (UniqueName: \"kubernetes.io/projected/32a79fe0-e619-4a66-8617-e8111bdc7e96-kube-api-access-jkq7j\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019960 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d7ef0c-272b-4d1e-965f-484975d5d25c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 
14:46:42.020091 master-0 kubenswrapper[7744]: I0220 14:46:42.019981 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8157f73d-c757-40c4-80bc-3c9de2f2288a-serving-cert\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:42.021084 master-0 kubenswrapper[7744]: I0220 14:46:42.021058 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5p2\" (UniqueName: \"kubernetes.io/projected/d28490b0-96ca-4fe0-8fae-e6f8390f933b-kube-api-access-qm5p2\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:46:42.021129 master-0 kubenswrapper[7744]: I0220 14:46:42.021116 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-cnibin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.021159 master-0 kubenswrapper[7744]: I0220 14:46:42.021141 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e9807a-859c-44c1-8511-0066b0f59ff8-config\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 14:46:42.021188 master-0 kubenswrapper[7744]: I0220 14:46:42.021177 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c31b8a7-edcb-403d-9122-7eb740f7d659-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: 
\"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 14:46:42.021226 master-0 kubenswrapper[7744]: I0220 14:46:42.021197 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-os-release\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.021226 master-0 kubenswrapper[7744]: I0220 14:46:42.021220 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:42.021438 master-0 kubenswrapper[7744]: I0220 14:46:42.021417 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e9807a-859c-44c1-8511-0066b0f59ff8-config\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 14:46:42.021574 master-0 kubenswrapper[7744]: I0220 14:46:42.021556 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c31b8a7-edcb-403d-9122-7eb740f7d659-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 14:46:42.029038 master-0 kubenswrapper[7744]: I0220 14:46:42.028034 7744 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-daemon-config\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.029038 master-0 kubenswrapper[7744]: I0220 14:46:42.028339 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/419f28a9-8fd7-4b59-9554-4d884a1208b5-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:46:42.029038 master-0 kubenswrapper[7744]: I0220 14:46:42.028797 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8157f73d-c757-40c4-80bc-3c9de2f2288a-serving-cert\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:46:42.029038 master-0 kubenswrapper[7744]: I0220 14:46:42.028845 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db9dc349-5216-43ff-8c17-3a9384a010ea-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 14:46:42.029260 master-0 kubenswrapper[7744]: I0220 14:46:42.029087 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq"
Feb 20 14:46:42.034788 master-0 kubenswrapper[7744]: I0220 14:46:42.034737 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 20 14:46:42.035823 master-0 kubenswrapper[7744]: I0220 14:46:42.035797 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 20 14:46:42.036047 master-0 kubenswrapper[7744]: I0220 14:46:42.036028 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 20 14:46:42.036434 master-0 kubenswrapper[7744]: I0220 14:46:42.036413 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 20 14:46:42.037423 master-0 kubenswrapper[7744]: I0220 14:46:42.037398 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"
Feb 20 14:46:42.038692 master-0 kubenswrapper[7744]: I0220 14:46:42.038670 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:46:42.040414 master-0 kubenswrapper[7744]: I0220 14:46:42.040178 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31d71c90-cab7-4411-9426-0713cb026294-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:42.040414 master-0 kubenswrapper[7744]: I0220 14:46:42.040233 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"
Feb 20 14:46:42.051765 master-0 kubenswrapper[7744]: I0220 14:46:42.051182 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57cks\" (UniqueName: \"kubernetes.io/projected/31d71c90-cab7-4411-9426-0713cb026294-kube-api-access-57cks\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:42.051765 master-0 kubenswrapper[7744]: I0220 14:46:42.051560 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n85mh\" (UniqueName: \"kubernetes.io/projected/900e244c-67aa-402f-b5f0-d37c5c1cedf7-kube-api-access-n85mh\") pod \"csi-snapshot-controller-operator-6fb4df594f-p29qr\" (UID: \"900e244c-67aa-402f-b5f0-d37c5c1cedf7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr"
Feb 20 14:46:42.063695 master-0 kubenswrapper[7744]: I0220 14:46:42.063642 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj4dx\" (UniqueName: \"kubernetes.io/projected/c81ad608-a8ad-4289-a8d2-d48acb9b540c-kube-api-access-wj4dx\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj"
Feb 20 14:46:42.068304 master-0 kubenswrapper[7744]: I0220 14:46:42.068273 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smglm\" (UniqueName: \"kubernetes.io/projected/db9dc349-5216-43ff-8c17-3a9384a010ea-kube-api-access-smglm\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24"
Feb 20 14:46:42.068866 master-0 kubenswrapper[7744]: I0220 14:46:42.068836 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb46b\" (UniqueName: \"kubernetes.io/projected/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-kube-api-access-mb46b\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq"
Feb 20 14:46:42.091467 master-0 kubenswrapper[7744]: I0220 14:46:42.091432 7744 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 20 14:46:42.095231 master-0 kubenswrapper[7744]: I0220 14:46:42.095207 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svlzf\" (UniqueName: \"kubernetes.io/projected/9fd9f419-2cdc-4991-8fb9-87d76ac58976-kube-api-access-svlzf\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:46:42.102532 master-0 kubenswrapper[7744]: I0220 14:46:42.102513 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svhtr\" (UniqueName: \"kubernetes.io/projected/45d7ef0c-272b-4d1e-965f-484975d5d25c-kube-api-access-svhtr\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm"
Feb 20 14:46:42.121987 master-0 kubenswrapper[7744]: I0220 14:46:42.121959 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33675e96-ce49-49be-9117-954ac7cca5d5-webhook-cert\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4"
Feb 20 14:46:42.122121 master-0 kubenswrapper[7744]: I0220 14:46:42.122095 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mclrj\" (UniqueName: \"kubernetes.io/projected/5d2b154b-de63-4c9b-99d8-487fb3035fb9-kube-api-access-mclrj\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:46:42.122168 master-0 kubenswrapper[7744]: I0220 14:46:42.122138 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-bin\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.122168 master-0 kubenswrapper[7744]: I0220 14:46:42.122158 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"
Feb 20 14:46:42.122263 master-0 kubenswrapper[7744]: I0220 14:46:42.122176 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:42.122263 master-0 kubenswrapper[7744]: I0220 14:46:42.122203 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-multus\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.122263 master-0 kubenswrapper[7744]: I0220 14:46:42.122218 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"
Feb 20 14:46:42.122263 master-0 kubenswrapper[7744]: I0220 14:46:42.122248 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-k8s-cni-cncf-io\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.122263 master-0 kubenswrapper[7744]: I0220 14:46:42.122264 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-netns\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.122453 master-0 kubenswrapper[7744]: I0220 14:46:42.122282 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:46:42.122453 master-0 kubenswrapper[7744]: I0220 14:46:42.122300 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-log-socket\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.122453 master-0 kubenswrapper[7744]: I0220 14:46:42.122324 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.122453 master-0 kubenswrapper[7744]: I0220 14:46:42.122340 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-conf-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.122453 master-0 kubenswrapper[7744]: I0220 14:46:42.122355 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-system-cni-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.122453 master-0 kubenswrapper[7744]: I0220 14:46:42.122375 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-ovnkube-identity-cm\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4"
Feb 20 14:46:42.122453 master-0 kubenswrapper[7744]: I0220 14:46:42.122398 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nlf9\" (UniqueName: \"kubernetes.io/projected/5ea4c132-b6d0-4dc9-942d-48e359eed418-kube-api-access-7nlf9\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:42.122453 master-0 kubenswrapper[7744]: I0220 14:46:42.122422 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-slash\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.122453 master-0 kubenswrapper[7744]: I0220 14:46:42.122450 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-var-lib-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.122755 master-0 kubenswrapper[7744]: I0220 14:46:42.122466 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.122755 master-0 kubenswrapper[7744]: I0220 14:46:42.122487 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:46:42.122755 master-0 kubenswrapper[7744]: I0220 14:46:42.122507 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-binary-copy\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.122755 master-0 kubenswrapper[7744]: I0220 14:46:42.122529 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-etc-kubernetes\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.122755 master-0 kubenswrapper[7744]: I0220 14:46:42.122551 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:46:42.122755 master-0 kubenswrapper[7744]: I0220 14:46:42.122595 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:46:42.122755 master-0 kubenswrapper[7744]: I0220 14:46:42.122632 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fd9f419-2cdc-4991-8fb9-87d76ac58976-host-etc-kube\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:46:42.122755 master-0 kubenswrapper[7744]: I0220 14:46:42.122526 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33675e96-ce49-49be-9117-954ac7cca5d5-webhook-cert\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4"
Feb 20 14:46:42.123079 master-0 kubenswrapper[7744]: E0220 14:46:42.122794 7744 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 20 14:46:42.123079 master-0 kubenswrapper[7744]: E0220 14:46:42.122847 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls podName:b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:42.62283169 +0000 UTC m=+1.825031610 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-g7glt" (UID: "b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1") : secret "image-registry-operator-tls" not found
Feb 20 14:46:42.123079 master-0 kubenswrapper[7744]: E0220 14:46:42.123047 7744 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 20 14:46:42.123079 master-0 kubenswrapper[7744]: E0220 14:46:42.123076 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:42.623068706 +0000 UTC m=+1.825268626 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "node-tuning-operator-tls" not found
Feb 20 14:46:42.123235 master-0 kubenswrapper[7744]: I0220 14:46:42.123093 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-multus\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.123235 master-0 kubenswrapper[7744]: I0220 14:46:42.123111 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-conf-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.123309 master-0 kubenswrapper[7744]: I0220 14:46:42.123291 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-ovnkube-identity-cm\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4"
Feb 20 14:46:42.123357 master-0 kubenswrapper[7744]: I0220 14:46:42.123321 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.123357 master-0 kubenswrapper[7744]: I0220 14:46:42.123343 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.123436 master-0 kubenswrapper[7744]: I0220 14:46:42.123361 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-os-release\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.123436 master-0 kubenswrapper[7744]: I0220 14:46:42.123378 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-env-overrides\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4"
Feb 20 14:46:42.123508 master-0 kubenswrapper[7744]: I0220 14:46:42.123452 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fd9f419-2cdc-4991-8fb9-87d76ac58976-host-etc-kube\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
Feb 20 14:46:42.123508 master-0 kubenswrapper[7744]: E0220 14:46:42.123484 7744 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 20 14:46:42.123508 master-0 kubenswrapper[7744]: E0220 14:46:42.123505 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert podName:1fe69517-eec2-4721-933c-fa27cea7ab1f nodeName:}" failed. No retries permitted until 2026-02-20 14:46:42.623497626 +0000 UTC m=+1.825697536 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2sw9z" (UID: "1fe69517-eec2-4721-933c-fa27cea7ab1f") : secret "package-server-manager-serving-cert" not found
Feb 20 14:46:42.123613 master-0 kubenswrapper[7744]: E0220 14:46:42.123534 7744 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 20 14:46:42.123613 master-0 kubenswrapper[7744]: E0220 14:46:42.123561 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls podName:419f28a9-8fd7-4b59-9554-4d884a1208b5 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:42.623553887 +0000 UTC m=+1.825753807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-p7mjp" (UID: "419f28a9-8fd7-4b59-9554-4d884a1208b5") : secret "cluster-monitoring-operator-tls" not found
Feb 20 14:46:42.123613 master-0 kubenswrapper[7744]: I0220 14:46:42.123579 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-netns\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.123613 master-0 kubenswrapper[7744]: I0220 14:46:42.123600 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-k8s-cni-cncf-io\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.123838 master-0 kubenswrapper[7744]: I0220 14:46:42.123776 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:46:42.124141 master-0 kubenswrapper[7744]: I0220 14:46:42.124119 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-binary-copy\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.124228 master-0 kubenswrapper[7744]: I0220 14:46:42.124152 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-etc-kubernetes\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.124306 master-0 kubenswrapper[7744]: I0220 14:46:42.124294 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.124375 master-0 kubenswrapper[7744]: I0220 14:46:42.124341 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.124458 master-0 kubenswrapper[7744]: I0220 14:46:42.124436 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-env-overrides\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4"
Feb 20 14:46:42.124517 master-0 kubenswrapper[7744]: I0220 14:46:42.124495 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:46:42.124597 master-0 kubenswrapper[7744]: E0220 14:46:42.124571 7744 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 14:46:42.124638 master-0 kubenswrapper[7744]: E0220 14:46:42.124609 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:46:42.624597223 +0000 UTC m=+1.826797153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found
Feb 20 14:46:42.124709 master-0 kubenswrapper[7744]: I0220 14:46:42.124640 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9c94\" (UniqueName: \"kubernetes.io/projected/87cf4690-1ec1-44fc-94bd-730d9f2e6762-kube-api-access-r9c94\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:42.124787 master-0 kubenswrapper[7744]: I0220 14:46:42.124713 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:42.124787 master-0 kubenswrapper[7744]: I0220 14:46:42.124748 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-system-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.124787 master-0 kubenswrapper[7744]: I0220 14:46:42.124786 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-system-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.124912 master-0 kubenswrapper[7744]: I0220 14:46:42.124820 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk9xr\" (UniqueName: \"kubernetes.io/projected/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-kube-api-access-jk9xr\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"
Feb 20 14:46:42.124912 master-0 kubenswrapper[7744]: I0220 14:46:42.124845 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-ovn\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.124991 master-0 kubenswrapper[7744]: I0220 14:46:42.124906 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-kubelet\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.124991 master-0 kubenswrapper[7744]: I0220 14:46:42.124960 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-etc-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.124991 master-0 kubenswrapper[7744]: I0220 14:46:42.124976 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-script-lib\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.124991 master-0 kubenswrapper[7744]: I0220 14:46:42.124973 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d5fq\" (UniqueName: \"kubernetes.io/projected/c0a3548f-299c-4234-9bf1-c93efcb9740b-kube-api-access-7d5fq\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:46:42.125148 master-0 kubenswrapper[7744]: I0220 14:46:42.125025 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-cnibin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.125148 master-0 kubenswrapper[7744]: I0220 14:46:42.125088 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:46:42.125148 master-0 kubenswrapper[7744]: I0220 14:46:42.125118 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-os-release\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.125276 master-0 kubenswrapper[7744]: I0220 14:46:42.125158 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-script-lib\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.125276 master-0 kubenswrapper[7744]: I0220 14:46:42.125203 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-cnibin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.125276 master-0 kubenswrapper[7744]: I0220 14:46:42.125225 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/87cf4690-1ec1-44fc-94bd-730d9f2e6762-iptables-alerter-script\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:42.125276 master-0 kubenswrapper[7744]: I0220 14:46:42.125257 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-node-log\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.125439 master-0 kubenswrapper[7744]: I0220 14:46:42.125281 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-kubelet\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.125472 master-0 kubenswrapper[7744]: I0220 14:46:42.125439 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/87cf4690-1ec1-44fc-94bd-730d9f2e6762-iptables-alerter-script\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:42.125472 master-0 kubenswrapper[7744]: I0220 14:46:42.125442 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:46:42.125551 master-0 kubenswrapper[7744]: I0220 14:46:42.125467 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-os-release\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.125551 master-0 kubenswrapper[7744]: I0220 14:46:42.125487 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-netd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.125551 master-0 kubenswrapper[7744]: I0220 14:46:42.125513 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-config\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.125551 master-0 kubenswrapper[7744]: I0220 14:46:42.125544 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbw6n\" (UniqueName: \"kubernetes.io/projected/33675e96-ce49-49be-9117-954ac7cca5d5-kube-api-access-hbw6n\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4"
Feb 20 14:46:42.125709 master-0 kubenswrapper[7744]: I0220 14:46:42.125560 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cf4690-1ec1-44fc-94bd-730d9f2e6762-host-slash\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:42.125709 master-0 kubenswrapper[7744]: I0220 14:46:42.125585 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:46:42.125709 master-0 kubenswrapper[7744]: I0220 14:46:42.125697 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx"
Feb 20 14:46:42.125835 master-0 kubenswrapper[7744]: I0220 14:46:42.125735 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-config\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.125835 master-0 kubenswrapper[7744]: I0220 14:46:42.125740 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-cnibin\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.125835 master-0 kubenswrapper[7744]: I0220 14:46:42.125774 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:42.125971 master-0 kubenswrapper[7744]: I0220 14:46:42.125869 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-netns\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.125971 master-0 kubenswrapper[7744]: I0220 14:46:42.125894 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-systemd-units\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.125971 master-0 kubenswrapper[7744]: I0220 14:46:42.125913 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:42.125971 master-0 kubenswrapper[7744]: I0220 14:46:42.125955 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-bin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.126125 master-0 kubenswrapper[7744]: I0220 14:46:42.125981 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-kubelet\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.126125 master-0 kubenswrapper[7744]: I0220 14:46:42.126007 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6nr\" (UniqueName: \"kubernetes.io/projected/21384bd0-495c-406a-9462-e9e740c04686-kube-api-access-gr6nr\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.126505 master-0 kubenswrapper[7744]: I0220 14:46:42.126485 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph"
Feb 20 14:46:42.126574 master-0 kubenswrapper[7744]: I0220 14:46:42.126517 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-hostroot\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.126574 master-0 kubenswrapper[7744]: I0220 14:46:42.126543 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-whereabouts-configmap\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.126574 master-0 kubenswrapper[7744]: I0220 14:46:42.126050 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-bin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.126668 master-0 kubenswrapper[7744]: I0220 14:46:42.126587 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-hostroot\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.126668 master-0 kubenswrapper[7744]: E0220 14:46:42.126032 7744 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 20 14:46:42.126668 master-0 kubenswrapper[7744]: I0220 14:46:42.126608 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"
Feb 20 14:46:42.126668 master-0 kubenswrapper[7744]: I0220 14:46:42.126631 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.126668 master-0 kubenswrapper[7744]: E0220 14:46:42.126654 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:42.626632102 +0000 UTC m=+1.828832072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "performance-addon-operator-webhook-cert" not found
Feb 20 14:46:42.126978 master-0 kubenswrapper[7744]: I0220 14:46:42.126685 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-systemd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.126978 master-0 kubenswrapper[7744]: I0220 14:46:42.126725 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-multus-certs\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.126978 master-0 kubenswrapper[7744]: I0220 14:46:42.126761 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:46:42.126978 master-0 kubenswrapper[7744]: I0220 14:46:42.126798 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21384bd0-495c-406a-9462-e9e740c04686-ovn-node-metrics-cert\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.126978 master-0 kubenswrapper[7744]: E0220 14:46:42.126804 7744 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 14:46:42.126978 master-0 kubenswrapper[7744]: I0220 14:46:42.126850 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-multus-certs\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.126978 master-0 kubenswrapper[7744]: E0220 14:46:42.126874 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls podName:d28490b0-96ca-4fe0-8fae-e6f8390f933b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:42.626847488 +0000 UTC m=+1.829047428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls") pod "dns-operator-8c7d49845-gkrph" (UID: "d28490b0-96ca-4fe0-8fae-e6f8390f933b") : secret "metrics-tls" not found
Feb 20 14:46:42.126978 master-0 kubenswrapper[7744]: I0220 14:46:42.126918 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:46:42.126978 master-0 kubenswrapper[7744]: I0220 14:46:42.126960 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:46:42.126978 master-0 kubenswrapper[7744]: I0220 14:46:42.126952 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-whereabouts-configmap\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.126978 master-0 kubenswrapper[7744]: I0220 14:46:42.126983 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-env-overrides\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.127338 master-0 kubenswrapper[7744]: E0220 14:46:42.127009 7744 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 20 14:46:42.127338 master-0 kubenswrapper[7744]: E0220 14:46:42.127034 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics podName:c0a3548f-299c-4234-9bf1-c93efcb9740b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:42.627024312 +0000 UTC m=+1.829224232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-97m7r" (UID: "c0a3548f-299c-4234-9bf1-c93efcb9740b") : secret "marketplace-operator-metrics" not found
Feb 20 14:46:42.127338 master-0 kubenswrapper[7744]: I0220 14:46:42.127054 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psd59\" (UniqueName: \"kubernetes.io/projected/b6e6d218-d969-40b5-a32b-9b2093089dbf-kube-api-access-psd59\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.127338 master-0 kubenswrapper[7744]: I0220 14:46:42.127071 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.127338 master-0 kubenswrapper[7744]: I0220 14:46:42.127092 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-socket-dir-parent\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.127338 master-0 kubenswrapper[7744]: I0220 14:46:42.127095 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-env-overrides\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.127338 master-0 kubenswrapper[7744]: I0220 14:46:42.127095 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21384bd0-495c-406a-9462-e9e740c04686-ovn-node-metrics-cert\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.127338 master-0 kubenswrapper[7744]: I0220 14:46:42.127158 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-socket-dir-parent\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 14:46:42.146968 master-0 kubenswrapper[7744]: I0220 14:46:42.146937 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fttgr\" (UniqueName: \"kubernetes.io/projected/419f28a9-8fd7-4b59-9554-4d884a1208b5-kube-api-access-fttgr\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"
Feb 20 14:46:42.161801 master-0 kubenswrapper[7744]: I0220 14:46:42.161763 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c31b8a7-edcb-403d-9122-7eb740f7d659-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww"
Feb 20 14:46:42.184751 master-0 kubenswrapper[7744]: I0220 14:46:42.184689 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnwtd\" (UniqueName: \"kubernetes.io/projected/1fe69517-eec2-4721-933c-fa27cea7ab1f-kube-api-access-rnwtd\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:46:42.203889 master-0 kubenswrapper[7744]: I0220 14:46:42.203827 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwb5n\" (UniqueName: \"kubernetes.io/projected/234a44fd-c153-47a6-a11d-7d4b7165c236-kube-api-access-gwb5n\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 14:46:42.226382 master-0 kubenswrapper[7744]: I0220 14:46:42.226291 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk5m4\" (UniqueName: \"kubernetes.io/projected/8157f73d-c757-40c4-80bc-3c9de2f2288a-kube-api-access-bk5m4\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx"
Feb 20 14:46:42.227747 master-0 kubenswrapper[7744]: I0220 14:46:42.227699 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-slash\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.227868 master-0 kubenswrapper[7744]: I0220 14:46:42.227824 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-slash\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.227939 master-0 kubenswrapper[7744]: I0220 14:46:42.227894 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-var-lib-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.227978 master-0 kubenswrapper[7744]: I0220 14:46:42.227943 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.227978 master-0 kubenswrapper[7744]: I0220 14:46:42.227968 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-system-cni-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.228058 master-0 kubenswrapper[7744]: I0220 14:46:42.228026 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-system-cni-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.228105 master-0 kubenswrapper[7744]: I0220 14:46:42.228082 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-os-release\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.228198 master-0 kubenswrapper[7744]: I0220 14:46:42.228174 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-os-release\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.228198 master-0 kubenswrapper[7744]: I0220 14:46:42.228169 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-var-lib-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.228306 master-0 kubenswrapper[7744]: I0220 14:46:42.228228 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.228350 master-0 kubenswrapper[7744]: I0220 14:46:42.228318 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:42.228517 master-0 kubenswrapper[7744]: I0220 14:46:42.228470 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-ovn\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.228582 master-0 kubenswrapper[7744]: I0220 14:46:42.228543 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-etc-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.228632 master-0 kubenswrapper[7744]: I0220 14:46:42.228576 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-etc-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.228632 master-0 kubenswrapper[7744]: I0220 14:46:42.228590 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-node-log\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.228632 master-0 kubenswrapper[7744]: I0220 14:46:42.228550 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-ovn\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.228632 master-0 kubenswrapper[7744]: I0220 14:46:42.228622 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-netd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.228785 master-0 kubenswrapper[7744]: I0220 14:46:42.228634 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-node-log\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.228785 master-0 kubenswrapper[7744]: I0220 14:46:42.228669 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cf4690-1ec1-44fc-94bd-730d9f2e6762-host-slash\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:42.228785 master-0 kubenswrapper[7744]: I0220 14:46:42.228693 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-netd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.228785 master-0 kubenswrapper[7744]: I0220 14:46:42.228704 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-cnibin\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.228785 master-0 kubenswrapper[7744]: I0220 14:46:42.228736 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cf4690-1ec1-44fc-94bd-730d9f2e6762-host-slash\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 14:46:42.228785 master-0 kubenswrapper[7744]: I0220 14:46:42.228738 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:42.228785 master-0 kubenswrapper[7744]: I0220 14:46:42.228771 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-systemd-units\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.228785 master-0 kubenswrapper[7744]: I0220 14:46:42.228792 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-netns\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.229056 master-0 kubenswrapper[7744]: I0220 14:46:42.228814 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-kubelet\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.229056 master-0 kubenswrapper[7744]: I0220 14:46:42.228822 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-cnibin\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.229056 master-0 kubenswrapper[7744]: E0220 14:46:42.228838 7744 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 20 14:46:42.229056 master-0 kubenswrapper[7744]: I0220 14:46:42.228853 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-systemd-units\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.229056 master-0 kubenswrapper[7744]: I0220 14:46:42.228882 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-netns\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.229056 master-0 kubenswrapper[7744]: E0220 14:46:42.228905 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:42.728879833 +0000 UTC m=+1.931079793 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : secret "metrics-daemon-secret" not found
Feb 20 14:46:42.229056 master-0 kubenswrapper[7744]: I0220 14:46:42.228942 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"
Feb 20 14:46:42.229056 master-0 kubenswrapper[7744]: I0220 14:46:42.229002 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.229056 master-0 kubenswrapper[7744]: E0220 14:46:42.229026 7744 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 20 14:46:42.229056 master-0 kubenswrapper[7744]: I0220 14:46:42.229040 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-systemd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.229491 master-0 kubenswrapper[7744]: E0220 14:46:42.229071 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs podName:a1fb2774-6dd7-4429-9df3-4ddfcdaac939 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:42.729053537 +0000 UTC m=+1.931253457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-wl49x" (UID: "a1fb2774-6dd7-4429-9df3-4ddfcdaac939") : secret "multus-admission-controller-secret" not found
Feb 20 14:46:42.229491 master-0 kubenswrapper[7744]: I0220 14:46:42.229109 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-kubelet\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.229491 master-0 kubenswrapper[7744]: I0220 14:46:42.229108 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-systemd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:42.229491 master-0 kubenswrapper[7744]: I0220 14:46:42.229224 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.229491 master-0 kubenswrapper[7744]: I0220 14:46:42.229246 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") "
pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:42.229491 master-0 kubenswrapper[7744]: I0220 14:46:42.229272 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:42.229491 master-0 kubenswrapper[7744]: I0220 14:46:42.229350 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-bin\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:42.229491 master-0 kubenswrapper[7744]: I0220 14:46:42.229380 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-bin\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:42.229491 master-0 kubenswrapper[7744]: I0220 14:46:42.229419 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-log-socket\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:42.229491 master-0 kubenswrapper[7744]: I0220 14:46:42.229442 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-log-socket\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 
14:46:42.229491 master-0 kubenswrapper[7744]: I0220 14:46:42.229445 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:42.229491 master-0 kubenswrapper[7744]: I0220 14:46:42.229465 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:42.244604 master-0 kubenswrapper[7744]: I0220 14:46:42.244541 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43e9807a-859c-44c1-8511-0066b0f59ff8-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 14:46:42.266117 master-0 kubenswrapper[7744]: I0220 14:46:42.266073 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:42.283639 master-0 kubenswrapper[7744]: I0220 14:46:42.282568 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzmqr\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-kube-api-access-pzmqr\") pod 
\"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:42.283639 master-0 kubenswrapper[7744]: I0220 14:46:42.283075 7744 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 14:46:42.302287 master-0 kubenswrapper[7744]: I0220 14:46:42.302255 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jn8g\" (UniqueName: \"kubernetes.io/projected/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-kube-api-access-4jn8g\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 14:46:42.324655 master-0 kubenswrapper[7744]: I0220 14:46:42.324621 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cede061-d85a-4366-9f1e-90be51f726fc-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:46:42.344918 master-0 kubenswrapper[7744]: I0220 14:46:42.344883 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5p2\" (UniqueName: \"kubernetes.io/projected/d28490b0-96ca-4fe0-8fae-e6f8390f933b-kube-api-access-qm5p2\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:46:42.362616 master-0 kubenswrapper[7744]: I0220 14:46:42.362589 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkq7j\" (UniqueName: \"kubernetes.io/projected/32a79fe0-e619-4a66-8617-e8111bdc7e96-kube-api-access-jkq7j\") pod \"multus-m6hpf\" (UID: 
\"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 14:46:42.394864 master-0 kubenswrapper[7744]: I0220 14:46:42.394789 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/989af121-da08-4f40-b08c-dd2aa67bc60c-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 14:46:42.401346 master-0 kubenswrapper[7744]: E0220 14:46:42.401285 7744 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e" Feb 20 14:46:42.401600 master-0 kubenswrapper[7744]: E0220 14:46:42.401550 7744 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:csi-snapshot-controller-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e,Command:[],Args:[start -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERAND_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9,ValueFrom:nil,},EnvVar{Name:WEBHOOK_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d953b34fe1ab03e9a57b3c91de4220683cf92e804edb5f5c230e5888e1c5a6d2,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 
-3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n85mh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-snapshot-controller-operator-6fb4df594f-p29qr_openshift-cluster-storage-operator(900e244c-67aa-402f-b5f0-d37c5c1cedf7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 20 14:46:42.402909 master-0 kubenswrapper[7744]: E0220 14:46:42.402839 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr" podUID="900e244c-67aa-402f-b5f0-d37c5c1cedf7" Feb 20 14:46:42.415038 master-0 kubenswrapper[7744]: E0220 14:46:42.414989 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 20 14:46:42.441854 master-0 kubenswrapper[7744]: E0220 14:46:42.441796 7744 kubelet.go:1929] "Failed creating a mirror 
pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:46:42.459631 master-0 kubenswrapper[7744]: W0220 14:46:42.459586 7744 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 20 14:46:42.459818 master-0 kubenswrapper[7744]: E0220 14:46:42.459662 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Feb 20 14:46:42.480510 master-0 kubenswrapper[7744]: E0220 14:46:42.480418 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 20 14:46:42.498381 master-0 kubenswrapper[7744]: E0220 14:46:42.498321 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 20 14:46:42.530708 master-0 kubenswrapper[7744]: I0220 14:46:42.530664 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mclrj\" (UniqueName: 
\"kubernetes.io/projected/5d2b154b-de63-4c9b-99d8-487fb3035fb9-kube-api-access-mclrj\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" Feb 20 14:46:42.546556 master-0 kubenswrapper[7744]: I0220 14:46:42.546532 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nlf9\" (UniqueName: \"kubernetes.io/projected/5ea4c132-b6d0-4dc9-942d-48e359eed418-kube-api-access-7nlf9\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:46:42.562674 master-0 kubenswrapper[7744]: I0220 14:46:42.562641 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9c94\" (UniqueName: \"kubernetes.io/projected/87cf4690-1ec1-44fc-94bd-730d9f2e6762-kube-api-access-r9c94\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r" Feb 20 14:46:42.588192 master-0 kubenswrapper[7744]: I0220 14:46:42.588160 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk9xr\" (UniqueName: \"kubernetes.io/projected/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-kube-api-access-jk9xr\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" Feb 20 14:46:42.602216 master-0 kubenswrapper[7744]: I0220 14:46:42.602187 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbw6n\" (UniqueName: \"kubernetes.io/projected/33675e96-ce49-49be-9117-954ac7cca5d5-kube-api-access-hbw6n\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 14:46:42.623414 master-0 
kubenswrapper[7744]: I0220 14:46:42.623353 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6nr\" (UniqueName: \"kubernetes.io/projected/21384bd0-495c-406a-9462-e9e740c04686-kube-api-access-gr6nr\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:42.633749 master-0 kubenswrapper[7744]: I0220 14:46:42.633607 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:46:42.638343 master-0 kubenswrapper[7744]: I0220 14:46:42.637853 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:42.638343 master-0 kubenswrapper[7744]: I0220 14:46:42.637887 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:46:42.638343 master-0 kubenswrapper[7744]: I0220 14:46:42.637910 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod 
\"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:46:42.638343 master-0 kubenswrapper[7744]: I0220 14:46:42.637960 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:42.638343 master-0 kubenswrapper[7744]: I0220 14:46:42.637981 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:46:42.638744 master-0 kubenswrapper[7744]: I0220 14:46:42.637997 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:42.638744 master-0 kubenswrapper[7744]: E0220 14:46:42.633972 7744 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 14:46:42.639052 master-0 kubenswrapper[7744]: E0220 14:46:42.638838 7744 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:46:43.638800139 +0000 UTC m=+2.841000099 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found Feb 20 14:46:42.639052 master-0 kubenswrapper[7744]: E0220 14:46:42.638849 7744 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 20 14:46:42.639052 master-0 kubenswrapper[7744]: I0220 14:46:42.638906 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:46:42.639052 master-0 kubenswrapper[7744]: E0220 14:46:42.638957 7744 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 20 14:46:42.639052 master-0 kubenswrapper[7744]: E0220 14:46:42.638998 7744 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 20 14:46:42.639052 master-0 kubenswrapper[7744]: E0220 14:46:42.638969 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics podName:c0a3548f-299c-4234-9bf1-c93efcb9740b nodeName:}" failed. 
No retries permitted until 2026-02-20 14:46:43.638952703 +0000 UTC m=+2.841152663 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-97m7r" (UID: "c0a3548f-299c-4234-9bf1-c93efcb9740b") : secret "marketplace-operator-metrics" not found Feb 20 14:46:42.639250 master-0 kubenswrapper[7744]: E0220 14:46:42.639065 7744 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 20 14:46:42.639250 master-0 kubenswrapper[7744]: E0220 14:46:42.639084 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert podName:1fe69517-eec2-4721-933c-fa27cea7ab1f nodeName:}" failed. No retries permitted until 2026-02-20 14:46:43.639052475 +0000 UTC m=+2.841252425 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2sw9z" (UID: "1fe69517-eec2-4721-933c-fa27cea7ab1f") : secret "package-server-manager-serving-cert" not found Feb 20 14:46:42.639250 master-0 kubenswrapper[7744]: E0220 14:46:42.639113 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls podName:d28490b0-96ca-4fe0-8fae-e6f8390f933b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:43.639100297 +0000 UTC m=+2.841300247 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls") pod "dns-operator-8c7d49845-gkrph" (UID: "d28490b0-96ca-4fe0-8fae-e6f8390f933b") : secret "metrics-tls" not found Feb 20 14:46:42.639250 master-0 kubenswrapper[7744]: E0220 14:46:42.639146 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls podName:419f28a9-8fd7-4b59-9554-4d884a1208b5 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:43.639131317 +0000 UTC m=+2.841331277 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-p7mjp" (UID: "419f28a9-8fd7-4b59-9554-4d884a1208b5") : secret "cluster-monitoring-operator-tls" not found Feb 20 14:46:42.639250 master-0 kubenswrapper[7744]: E0220 14:46:42.638867 7744 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 20 14:46:42.639250 master-0 kubenswrapper[7744]: E0220 14:46:42.639173 7744 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 20 14:46:42.639250 master-0 kubenswrapper[7744]: E0220 14:46:42.639201 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:43.639189819 +0000 UTC m=+2.841389779 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "node-tuning-operator-tls" not found Feb 20 14:46:42.639250 master-0 kubenswrapper[7744]: E0220 14:46:42.639229 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls podName:b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:43.639216049 +0000 UTC m=+2.841416009 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-g7glt" (UID: "b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1") : secret "image-registry-operator-tls" not found Feb 20 14:46:42.640193 master-0 kubenswrapper[7744]: E0220 14:46:42.639307 7744 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 20 14:46:42.640193 master-0 kubenswrapper[7744]: E0220 14:46:42.639351 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:43.639336942 +0000 UTC m=+2.841536902 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "performance-addon-operator-webhook-cert" not found
Feb 20 14:46:42.662410 master-0 kubenswrapper[7744]: I0220 14:46:42.662095 7744 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 20 14:46:42.664048 master-0 kubenswrapper[7744]: I0220 14:46:42.663206 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psd59\" (UniqueName: \"kubernetes.io/projected/b6e6d218-d969-40b5-a32b-9b2093089dbf-kube-api-access-psd59\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 14:46:42.668570 master-0 kubenswrapper[7744]: I0220 14:46:42.668468 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:42.740752 master-0 kubenswrapper[7744]: I0220 14:46:42.740651 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:42.740959 master-0 kubenswrapper[7744]: E0220 14:46:42.740909 7744 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 20 14:46:42.741031 master-0 kubenswrapper[7744]: I0220 14:46:42.741007 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"
Feb 20 14:46:42.741089 master-0 kubenswrapper[7744]: E0220 14:46:42.741039 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:43.741008969 +0000 UTC m=+2.943208899 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : secret "metrics-daemon-secret" not found
Feb 20 14:46:42.741352 master-0 kubenswrapper[7744]: E0220 14:46:42.741298 7744 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 20 14:46:42.741432 master-0 kubenswrapper[7744]: E0220 14:46:42.741413 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs podName:a1fb2774-6dd7-4429-9df3-4ddfcdaac939 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:43.741388888 +0000 UTC m=+2.943588828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-wl49x" (UID: "a1fb2774-6dd7-4429-9df3-4ddfcdaac939") : secret "multus-admission-controller-secret" not found
Feb 20 14:46:42.899151 master-0 kubenswrapper[7744]: I0220 14:46:42.899081 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:46:43.422724 master-0 kubenswrapper[7744]: E0220 14:46:43.422618 7744 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9"
Feb 20 14:46:43.423724 master-0 kubenswrapper[7744]: E0220 14:46:43.422854 7744 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9c94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-cgp8r_openshift-network-operator(87cf4690-1ec1-44fc-94bd-730d9f2e6762): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 20 14:46:43.424301 master-0 kubenswrapper[7744]: E0220 14:46:43.424194 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-cgp8r" podUID="87cf4690-1ec1-44fc-94bd-730d9f2e6762"
Feb 20 14:46:43.653531 master-0 kubenswrapper[7744]: I0220 14:46:43.653463 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph"
Feb 20 14:46:43.653882 master-0 kubenswrapper[7744]: E0220 14:46:43.653728 7744 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 14:46:43.653882 master-0 kubenswrapper[7744]: E0220 14:46:43.653834 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls podName:d28490b0-96ca-4fe0-8fae-e6f8390f933b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:45.653807605 +0000 UTC m=+4.856007565 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls") pod "dns-operator-8c7d49845-gkrph" (UID: "d28490b0-96ca-4fe0-8fae-e6f8390f933b") : secret "metrics-tls" not found
Feb 20 14:46:43.654412 master-0 kubenswrapper[7744]: I0220 14:46:43.654361 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:46:43.654499 master-0 kubenswrapper[7744]: I0220 14:46:43.654431 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"
Feb 20 14:46:43.654499 master-0 kubenswrapper[7744]: I0220 14:46:43.654471 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"
Feb 20 14:46:43.654610 master-0 kubenswrapper[7744]: I0220 14:46:43.654509 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:43.654610 master-0 kubenswrapper[7744]: I0220 14:46:43.654564 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:46:43.654715 master-0 kubenswrapper[7744]: I0220 14:46:43.654618 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:46:43.654715 master-0 kubenswrapper[7744]: I0220 14:46:43.654700 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:43.654827 master-0 kubenswrapper[7744]: E0220 14:46:43.654813 7744 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 20 14:46:43.654881 master-0 kubenswrapper[7744]: E0220 14:46:43.654858 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:45.65484394 +0000 UTC m=+4.857043900 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "performance-addon-operator-webhook-cert" not found
Feb 20 14:46:43.654969 master-0 kubenswrapper[7744]: E0220 14:46:43.654951 7744 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 20 14:46:43.655036 master-0 kubenswrapper[7744]: E0220 14:46:43.654993 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics podName:c0a3548f-299c-4234-9bf1-c93efcb9740b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:45.654979093 +0000 UTC m=+4.857179043 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-97m7r" (UID: "c0a3548f-299c-4234-9bf1-c93efcb9740b") : secret "marketplace-operator-metrics" not found
Feb 20 14:46:43.655664 master-0 kubenswrapper[7744]: E0220 14:46:43.655051 7744 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 20 14:46:43.655832 master-0 kubenswrapper[7744]: E0220 14:46:43.655795 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls podName:b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:45.655775173 +0000 UTC m=+4.857975123 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-g7glt" (UID: "b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1") : secret "image-registry-operator-tls" not found
Feb 20 14:46:43.655949 master-0 kubenswrapper[7744]: E0220 14:46:43.655896 7744 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 20 14:46:43.656021 master-0 kubenswrapper[7744]: E0220 14:46:43.655967 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls podName:419f28a9-8fd7-4b59-9554-4d884a1208b5 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:45.655954817 +0000 UTC m=+4.858154767 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-p7mjp" (UID: "419f28a9-8fd7-4b59-9554-4d884a1208b5") : secret "cluster-monitoring-operator-tls" not found
Feb 20 14:46:43.656081 master-0 kubenswrapper[7744]: E0220 14:46:43.656036 7744 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 20 14:46:43.656081 master-0 kubenswrapper[7744]: E0220 14:46:43.656071 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:45.65606027 +0000 UTC m=+4.858260220 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "node-tuning-operator-tls" not found
Feb 20 14:46:43.656185 master-0 kubenswrapper[7744]: E0220 14:46:43.656137 7744 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 20 14:46:43.656185 master-0 kubenswrapper[7744]: E0220 14:46:43.656173 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert podName:1fe69517-eec2-4721-933c-fa27cea7ab1f nodeName:}" failed. No retries permitted until 2026-02-20 14:46:45.656162242 +0000 UTC m=+4.858362192 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2sw9z" (UID: "1fe69517-eec2-4721-933c-fa27cea7ab1f") : secret "package-server-manager-serving-cert" not found
Feb 20 14:46:43.656272 master-0 kubenswrapper[7744]: E0220 14:46:43.656232 7744 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 20 14:46:43.656272 master-0 kubenswrapper[7744]: E0220 14:46:43.656266 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:46:45.656254074 +0000 UTC m=+4.858454024 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found
Feb 20 14:46:43.755666 master-0 kubenswrapper[7744]: I0220 14:46:43.755371 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"
Feb 20 14:46:43.755666 master-0 kubenswrapper[7744]: I0220 14:46:43.755561 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv"
Feb 20 14:46:43.755957 master-0 kubenswrapper[7744]: E0220 14:46:43.755761 7744 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 20 14:46:43.755957 master-0 kubenswrapper[7744]: E0220 14:46:43.755785 7744 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 20 14:46:43.755957 master-0 kubenswrapper[7744]: E0220 14:46:43.755855 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:45.75583734 +0000 UTC m=+4.958037270 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : secret "metrics-daemon-secret" not found
Feb 20 14:46:43.755957 master-0 kubenswrapper[7744]: E0220 14:46:43.755901 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs podName:a1fb2774-6dd7-4429-9df3-4ddfcdaac939 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:45.755867361 +0000 UTC m=+4.958067301 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-wl49x" (UID: "a1fb2774-6dd7-4429-9df3-4ddfcdaac939") : secret "multus-admission-controller-secret" not found
Feb 20 14:46:44.122209 master-0 kubenswrapper[7744]: E0220 14:46:44.122061 7744 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83"
Feb 20 14:46:44.122353 master-0 kubenswrapper[7744]: E0220 14:46:44.122301 7744 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wj4dx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-c48c8bf7c-pvlhj_openshift-service-ca-operator(c81ad608-a8ad-4289-a8d2-d48acb9b540c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 20 14:46:44.124273 master-0 kubenswrapper[7744]: E0220 14:46:44.124189 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" podUID="c81ad608-a8ad-4289-a8d2-d48acb9b540c"
Feb 20 14:46:44.336524 master-0 kubenswrapper[7744]: I0220 14:46:44.336462 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 14:46:44.756097 master-0 kubenswrapper[7744]: E0220 14:46:44.755969 7744 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19"
Feb 20 14:46:44.757101 master-0 kubenswrapper[7744]: E0220 14:46:44.756167 7744 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-apiserver-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19,Command:[cluster-openshift-apiserver-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:KUBE_APISERVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-smglm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-apiserver-operator-8586dccc9b-pwm24_openshift-apiserver-operator(db9dc349-5216-43ff-8c17-3a9384a010ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 20 14:46:44.759195 master-0 kubenswrapper[7744]: E0220 14:46:44.759085 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" podUID="db9dc349-5216-43ff-8c17-3a9384a010ea"
Feb 20 14:46:45.407797 master-0 kubenswrapper[7744]: E0220 14:46:45.407713 7744 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc"
Feb 20 14:46:45.408131 master-0 kubenswrapper[7744]: E0220 14:46:45.407906 7744 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-storage-version-migrator-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc,Command:[cluster-kube-storage-version-migrator-operator start],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mb46b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-storage-version-migrator-operator-fc889cfd5-hxgzq_openshift-kube-storage-version-migrator-operator(8b73ae08-0ad7-4f99-8002-6df0d984cd2c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 20 14:46:45.409610 master-0 kubenswrapper[7744]: E0220 14:46:45.409515 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" podUID="8b73ae08-0ad7-4f99-8002-6df0d984cd2c"
Feb 20 14:46:45.681339 master-0 kubenswrapper[7744]: I0220 14:46:45.681162 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:45.681587 master-0 kubenswrapper[7744]: E0220 14:46:45.681337 7744 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 20 14:46:45.681587 master-0 kubenswrapper[7744]: I0220 14:46:45.681442 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph"
Feb 20 14:46:45.681587 master-0 kubenswrapper[7744]: I0220 14:46:45.681487 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:46:45.681587 master-0 kubenswrapper[7744]: E0220 14:46:45.681551 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.681522492 +0000 UTC m=+8.883722442 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "performance-addon-operator-webhook-cert" not found
Feb 20 14:46:45.681843 master-0 kubenswrapper[7744]: E0220 14:46:45.681625 7744 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 20 14:46:45.681843 master-0 kubenswrapper[7744]: I0220 14:46:45.681742 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"
Feb 20 14:46:45.681843 master-0 kubenswrapper[7744]: E0220 14:46:45.681813 7744 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 20 14:46:45.682152 master-0 kubenswrapper[7744]: E0220 14:46:45.681855 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls podName:d28490b0-96ca-4fe0-8fae-e6f8390f933b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.681813819 +0000 UTC m=+8.884013779 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls") pod "dns-operator-8c7d49845-gkrph" (UID: "d28490b0-96ca-4fe0-8fae-e6f8390f933b") : secret "metrics-tls" not found
Feb 20 14:46:45.682152 master-0 kubenswrapper[7744]: E0220 14:46:45.681888 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics podName:c0a3548f-299c-4234-9bf1-c93efcb9740b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.68187294 +0000 UTC m=+8.884072890 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-97m7r" (UID: "c0a3548f-299c-4234-9bf1-c93efcb9740b") : secret "marketplace-operator-metrics" not found
Feb 20 14:46:45.682152 master-0 kubenswrapper[7744]: I0220 14:46:45.681968 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"
Feb 20 14:46:45.682152 master-0 kubenswrapper[7744]: E0220 14:46:45.681989 7744 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 20 14:46:45.682152 master-0 kubenswrapper[7744]: I0220 14:46:45.682011 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 14:46:45.682152 master-0 kubenswrapper[7744]: E0220 14:46:45.682063 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls podName:b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.682045355 +0000 UTC m=+8.884245315 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-g7glt" (UID: "b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1") : secret "image-registry-operator-tls" not found
Feb 20 14:46:45.682152 master-0 kubenswrapper[7744]: E0220 14:46:45.682073 7744 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 20 14:46:45.682152 master-0 kubenswrapper[7744]: I0220 14:46:45.682095 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:46:45.682152 master-0 kubenswrapper[7744]: E0220 14:46:45.682107 7744 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 20 14:46:45.682152 master-0 kubenswrapper[7744]: E0220 14:46:45.682114 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls podName:419f28a9-8fd7-4b59-9554-4d884a1208b5 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.682099066 +0000 UTC m=+8.884298986 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-p7mjp" (UID: "419f28a9-8fd7-4b59-9554-4d884a1208b5") : secret "cluster-monitoring-operator-tls" not found
Feb 20 14:46:45.682152 master-0 kubenswrapper[7744]: E0220 14:46:45.682149 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.682131507 +0000 UTC m=+8.884331537 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "node-tuning-operator-tls" not found Feb 20 14:46:45.683004 master-0 kubenswrapper[7744]: I0220 14:46:45.682173 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:46:45.683004 master-0 kubenswrapper[7744]: E0220 14:46:45.682193 7744 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 20 14:46:45.683004 master-0 kubenswrapper[7744]: E0220 14:46:45.682234 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert podName:1fe69517-eec2-4721-933c-fa27cea7ab1f nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.682221689 +0000 UTC m=+8.884421649 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2sw9z" (UID: "1fe69517-eec2-4721-933c-fa27cea7ab1f") : secret "package-server-manager-serving-cert" not found Feb 20 14:46:45.683004 master-0 kubenswrapper[7744]: E0220 14:46:45.682284 7744 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 14:46:45.683004 master-0 kubenswrapper[7744]: E0220 14:46:45.682328 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.682320321 +0000 UTC m=+8.884520331 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found Feb 20 14:46:45.783652 master-0 kubenswrapper[7744]: I0220 14:46:45.783581 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:46:45.784699 master-0 kubenswrapper[7744]: E0220 14:46:45.783749 7744 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 20 14:46:45.784699 master-0 kubenswrapper[7744]: I0220 14:46:45.783758 7744 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" Feb 20 14:46:45.784699 master-0 kubenswrapper[7744]: E0220 14:46:45.783837 7744 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 20 14:46:45.784699 master-0 kubenswrapper[7744]: E0220 14:46:45.783845 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.783825134 +0000 UTC m=+8.986025054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : secret "metrics-daemon-secret" not found Feb 20 14:46:45.784699 master-0 kubenswrapper[7744]: E0220 14:46:45.784019 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs podName:a1fb2774-6dd7-4429-9df3-4ddfcdaac939 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.784000408 +0000 UTC m=+8.986200328 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-wl49x" (UID: "a1fb2774-6dd7-4429-9df3-4ddfcdaac939") : secret "multus-admission-controller-secret" not found Feb 20 14:46:45.995727 master-0 kubenswrapper[7744]: E0220 14:46:45.995649 7744 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" Feb 20 14:46:45.996046 master-0 kubenswrapper[7744]: E0220 14:46:45.995858 7744 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac,Command:[cluster-kube-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac,ValueFrom:nil,},EnvVar{Name:CLUSTER_POLICY_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1a3bef9a43438ab475af89a16225f015c17bff1244015493a6a42c049a922ea4,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.14,ValueFrom:nil,},EnvVa
r{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-operator-7bcfbc574b-lt7ww_openshift-kube-controller-manager-operator(4c31b8a7-edcb-403d-9122-7eb740f7d659): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 20 14:46:45.997207 master-0 kubenswrapper[7744]: E0220 14:46:45.997141 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled 
desc = copying config: context canceled\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" podUID="4c31b8a7-edcb-403d-9122-7eb740f7d659" Feb 20 14:46:46.107289 master-0 kubenswrapper[7744]: I0220 14:46:46.107182 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 20 14:46:46.116109 master-0 kubenswrapper[7744]: I0220 14:46:46.115993 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 20 14:46:46.121604 master-0 kubenswrapper[7744]: I0220 14:46:46.121543 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 20 14:46:46.568645 master-0 kubenswrapper[7744]: E0220 14:46:46.568567 7744 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e" Feb 20 14:46:46.568799 master-0 kubenswrapper[7744]: E0220 14:46:46.568747 7744 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 20 14:46:46.568799 master-0 kubenswrapper[7744]: container &Container{Name:authentication-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e,Command:[/bin/bash -ec],Args:[if [ -s /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then Feb 20 14:46:46.568799 master-0 kubenswrapper[7744]: echo "Copying system trust bundle" Feb 20 14:46:46.568799 master-0 kubenswrapper[7744]: cp -f /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem Feb 20 14:46:46.568799 master-0 kubenswrapper[7744]: fi Feb 20 14:46:46.568799 master-0 kubenswrapper[7744]: exec 
authentication-operator operator --config=/var/run/configmaps/config/operator-config.yaml --v=2 --terminate-on-files=/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt --terminate-on-files=/tmp/terminate Feb 20 14:46:46.568799 master-0 kubenswrapper[7744]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE_OAUTH_SERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346,ValueFrom:nil,},EnvVar{Name:IMAGE_OAUTH_APISERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_OAUTH_SERVER_IMAGE_VERSION,Value:4.18.33_openshift,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{209715200 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/service-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk5m4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod authentication-operator-5bd7c86784-6r5qx_openshift-authentication-operator(8157f73d-c757-40c4-80bc-3c9de2f2288a): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled Feb 20 14:46:46.568799 master-0 kubenswrapper[7744]: > logger="UnhandledError" Feb 20 14:46:46.569993 master-0 kubenswrapper[7744]: E0220 14:46:46.569912 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" podUID="8157f73d-c757-40c4-80bc-3c9de2f2288a" Feb 20 14:46:46.762877 master-0 kubenswrapper[7744]: I0220 14:46:46.762319 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-ljvkb"] Feb 20 14:46:47.117549 master-0 kubenswrapper[7744]: I0220 14:46:47.117402 7744 generic.go:334] "Generic (PLEG): container finished" podID="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" containerID="1233b754482b6558abf240af9822b6209076badce1d5bcade0d4d98c88cc1f1f" exitCode=0 Feb 20 14:46:47.117549 master-0 kubenswrapper[7744]: I0220 14:46:47.117508 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerDied","Data":"1233b754482b6558abf240af9822b6209076badce1d5bcade0d4d98c88cc1f1f"} Feb 20 14:46:47.121913 master-0 kubenswrapper[7744]: I0220 14:46:47.120706 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" event={"ID":"45d7ef0c-272b-4d1e-965f-484975d5d25c","Type":"ContainerStarted","Data":"233f31cc87ed77a81bb475184c8275cb1327d0aaed87c186b3895bc1d70da1c4"} Feb 20 14:46:47.123695 master-0 kubenswrapper[7744]: I0220 14:46:47.123131 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" 
event={"ID":"234a44fd-c153-47a6-a11d-7d4b7165c236","Type":"ContainerStarted","Data":"62a31d32d4ca4d676ab042ba4779a3437daeccc9e4cd7a7e48c41884a5b21dfe"} Feb 20 14:46:47.127700 master-0 kubenswrapper[7744]: I0220 14:46:47.127129 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" event={"ID":"989af121-da08-4f40-b08c-dd2aa67bc60c","Type":"ContainerStarted","Data":"31ee4b259747c34f0e0b3ef2fb4560b0c5185716f80403e8aa587e56efaa8aa2"} Feb 20 14:46:47.130121 master-0 kubenswrapper[7744]: I0220 14:46:47.130021 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-ljvkb" event={"ID":"929dffba-46da-4d81-a437-bc6a9fe79811","Type":"ContainerStarted","Data":"e6b22158f2a0887e8ea0d74b234993e0aec608e03c27efe7a886fc0349f774e3"} Feb 20 14:46:47.130121 master-0 kubenswrapper[7744]: I0220 14:46:47.130065 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 14:46:47.130121 master-0 kubenswrapper[7744]: I0220 14:46:47.130084 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-ljvkb" event={"ID":"929dffba-46da-4d81-a437-bc6a9fe79811","Type":"ContainerStarted","Data":"07f2250f0416c7a8aaa5ba7190cd272a32f30bcb4026105fc1ebf0050f1e79f2"} Feb 20 14:46:48.469689 master-0 kubenswrapper[7744]: I0220 14:46:48.468748 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:48.502991 master-0 kubenswrapper[7744]: I0220 14:46:48.502835 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.793777 7744 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg"] Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: E0220 14:46:48.794057 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014f3913-ac7e-431a-880c-91d979a5dfc7" containerName="assisted-installer-controller" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.794087 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="014f3913-ac7e-431a-880c-91d979a5dfc7" containerName="assisted-installer-controller" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: E0220 14:46:48.794113 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc6343c-22ec-4cf8-904f-6a93cd251993" containerName="prober" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.794132 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc6343c-22ec-4cf8-904f-6a93cd251993" containerName="prober" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.794269 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbc6343c-22ec-4cf8-904f-6a93cd251993" containerName="prober" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.794295 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="014f3913-ac7e-431a-880c-91d979a5dfc7" containerName="assisted-installer-controller" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.794751 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.798371 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.798698 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.799154 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.799453 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.799638 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.800037 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 20 14:46:48.825378 master-0 kubenswrapper[7744]: I0220 14:46:48.801641 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg"] Feb 20 14:46:48.932266 master-0 kubenswrapper[7744]: I0220 14:46:48.932213 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:48.932476 master-0 kubenswrapper[7744]: I0220 14:46:48.932283 7744 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14bdb95b-35f5-44f1-887c-c9e4a20948a6-serving-cert\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:48.932476 master-0 kubenswrapper[7744]: I0220 14:46:48.932396 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-config\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:48.932531 master-0 kubenswrapper[7744]: I0220 14:46:48.932512 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-client-ca\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:48.932591 master-0 kubenswrapper[7744]: I0220 14:46:48.932565 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnrkc\" (UniqueName: \"kubernetes.io/projected/14bdb95b-35f5-44f1-887c-c9e4a20948a6-kube-api-access-vnrkc\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:49.034105 master-0 kubenswrapper[7744]: I0220 14:46:49.033978 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14bdb95b-35f5-44f1-887c-c9e4a20948a6-serving-cert\") pod 
\"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:49.034366 master-0 kubenswrapper[7744]: E0220 14:46:49.034183 7744 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 14:46:49.034366 master-0 kubenswrapper[7744]: E0220 14:46:49.034276 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14bdb95b-35f5-44f1-887c-c9e4a20948a6-serving-cert podName:14bdb95b-35f5-44f1-887c-c9e4a20948a6 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.534249236 +0000 UTC m=+8.736449186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/14bdb95b-35f5-44f1-887c-c9e4a20948a6-serving-cert") pod "controller-manager-6c9b8f4d95-8vtvg" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6") : secret "serving-cert" not found Feb 20 14:46:49.034366 master-0 kubenswrapper[7744]: I0220 14:46:49.034281 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-config\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:49.034366 master-0 kubenswrapper[7744]: E0220 14:46:49.034340 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 20 14:46:49.034498 master-0 kubenswrapper[7744]: E0220 14:46:49.034421 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-config podName:14bdb95b-35f5-44f1-887c-c9e4a20948a6 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.53440176 +0000 UTC m=+8.736601680 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-config") pod "controller-manager-6c9b8f4d95-8vtvg" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6") : configmap "config" not found Feb 20 14:46:49.035145 master-0 kubenswrapper[7744]: I0220 14:46:49.034526 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-client-ca\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:49.035145 master-0 kubenswrapper[7744]: E0220 14:46:49.034723 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 20 14:46:49.035145 master-0 kubenswrapper[7744]: I0220 14:46:49.034773 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnrkc\" (UniqueName: \"kubernetes.io/projected/14bdb95b-35f5-44f1-887c-c9e4a20948a6-kube-api-access-vnrkc\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:49.035145 master-0 kubenswrapper[7744]: E0220 14:46:49.034810 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-client-ca podName:14bdb95b-35f5-44f1-887c-c9e4a20948a6 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.534783369 +0000 UTC m=+8.736983289 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-client-ca") pod "controller-manager-6c9b8f4d95-8vtvg" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6") : configmap "client-ca" not found Feb 20 14:46:49.035145 master-0 kubenswrapper[7744]: I0220 14:46:49.034972 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:49.035145 master-0 kubenswrapper[7744]: E0220 14:46:49.035039 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 20 14:46:49.035145 master-0 kubenswrapper[7744]: E0220 14:46:49.035061 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-proxy-ca-bundles podName:14bdb95b-35f5-44f1-887c-c9e4a20948a6 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:49.535054166 +0000 UTC m=+8.737254076 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-proxy-ca-bundles") pod "controller-manager-6c9b8f4d95-8vtvg" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6") : configmap "openshift-global-ca" not found Feb 20 14:46:49.062743 master-0 kubenswrapper[7744]: I0220 14:46:49.062073 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnrkc\" (UniqueName: \"kubernetes.io/projected/14bdb95b-35f5-44f1-887c-c9e4a20948a6-kube-api-access-vnrkc\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:49.143764 master-0 kubenswrapper[7744]: I0220 14:46:49.143643 7744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 14:46:49.143764 master-0 kubenswrapper[7744]: I0220 14:46:49.143679 7744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 14:46:49.541159 master-0 kubenswrapper[7744]: I0220 14:46:49.541063 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14bdb95b-35f5-44f1-887c-c9e4a20948a6-serving-cert\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:49.541159 master-0 kubenswrapper[7744]: I0220 14:46:49.541162 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-config\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:49.541992 master-0 kubenswrapper[7744]: E0220 14:46:49.541248 7744 secret.go:189] Couldn't get secret 
openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 14:46:49.541992 master-0 kubenswrapper[7744]: E0220 14:46:49.541340 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14bdb95b-35f5-44f1-887c-c9e4a20948a6-serving-cert podName:14bdb95b-35f5-44f1-887c-c9e4a20948a6 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:50.541318819 +0000 UTC m=+9.743518749 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/14bdb95b-35f5-44f1-887c-c9e4a20948a6-serving-cert") pod "controller-manager-6c9b8f4d95-8vtvg" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6") : secret "serving-cert" not found Feb 20 14:46:49.541992 master-0 kubenswrapper[7744]: E0220 14:46:49.541354 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 20 14:46:49.541992 master-0 kubenswrapper[7744]: E0220 14:46:49.541423 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-client-ca podName:14bdb95b-35f5-44f1-887c-c9e4a20948a6 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:50.541401281 +0000 UTC m=+9.743601231 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-client-ca") pod "controller-manager-6c9b8f4d95-8vtvg" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6") : configmap "client-ca" not found Feb 20 14:46:49.541992 master-0 kubenswrapper[7744]: E0220 14:46:49.541473 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 20 14:46:49.541992 master-0 kubenswrapper[7744]: E0220 14:46:49.541506 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-config podName:14bdb95b-35f5-44f1-887c-c9e4a20948a6 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:50.541494573 +0000 UTC m=+9.743694523 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-config") pod "controller-manager-6c9b8f4d95-8vtvg" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6") : configmap "config" not found Feb 20 14:46:49.541992 master-0 kubenswrapper[7744]: I0220 14:46:49.541271 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-client-ca\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:49.541992 master-0 kubenswrapper[7744]: I0220 14:46:49.541633 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:49.541992 master-0 
kubenswrapper[7744]: E0220 14:46:49.541749 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 20 14:46:49.541992 master-0 kubenswrapper[7744]: E0220 14:46:49.541801 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-proxy-ca-bundles podName:14bdb95b-35f5-44f1-887c-c9e4a20948a6 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:50.541789321 +0000 UTC m=+9.743989271 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-proxy-ca-bundles") pod "controller-manager-6c9b8f4d95-8vtvg" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6") : configmap "openshift-global-ca" not found Feb 20 14:46:49.717874 master-0 kubenswrapper[7744]: I0220 14:46:49.717738 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg"] Feb 20 14:46:49.718188 master-0 kubenswrapper[7744]: E0220 14:46:49.718008 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" podUID="14bdb95b-35f5-44f1-887c-c9e4a20948a6" Feb 20 14:46:49.727491 master-0 kubenswrapper[7744]: I0220 14:46:49.727428 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc"] Feb 20 14:46:49.728262 master-0 kubenswrapper[7744]: I0220 14:46:49.728217 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:49.731370 master-0 kubenswrapper[7744]: I0220 14:46:49.730678 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 20 14:46:49.731370 master-0 kubenswrapper[7744]: I0220 14:46:49.730983 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 20 14:46:49.732717 master-0 kubenswrapper[7744]: I0220 14:46:49.731805 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 20 14:46:49.732717 master-0 kubenswrapper[7744]: I0220 14:46:49.732081 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 20 14:46:49.732717 master-0 kubenswrapper[7744]: I0220 14:46:49.732504 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 20 14:46:49.738232 master-0 kubenswrapper[7744]: I0220 14:46:49.738183 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc"] Feb 20 14:46:49.743725 master-0 kubenswrapper[7744]: I0220 14:46:49.743660 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:46:49.743888 master-0 kubenswrapper[7744]: I0220 14:46:49.743752 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:46:49.744102 master-0 kubenswrapper[7744]: E0220 14:46:49.744040 7744 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 20 14:46:49.744218 master-0 kubenswrapper[7744]: I0220 14:46:49.744052 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:49.744218 master-0 kubenswrapper[7744]: E0220 14:46:49.744122 7744 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 20 14:46:49.744218 master-0 kubenswrapper[7744]: E0220 14:46:49.744049 7744 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 14:46:49.744218 master-0 kubenswrapper[7744]: E0220 14:46:49.744173 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert podName:1fe69517-eec2-4721-933c-fa27cea7ab1f nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.74413805 +0000 UTC m=+16.946338010 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2sw9z" (UID: "1fe69517-eec2-4721-933c-fa27cea7ab1f") : secret "package-server-manager-serving-cert" not found Feb 20 14:46:49.744218 master-0 kubenswrapper[7744]: E0220 14:46:49.744213 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.744192861 +0000 UTC m=+16.946392821 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "performance-addon-operator-webhook-cert" not found Feb 20 14:46:49.744734 master-0 kubenswrapper[7744]: I0220 14:46:49.744285 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:46:49.744734 master-0 kubenswrapper[7744]: E0220 14:46:49.744322 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.744284683 +0000 UTC m=+16.946484693 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found Feb 20 14:46:49.744734 master-0 kubenswrapper[7744]: I0220 14:46:49.744377 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:46:49.744734 master-0 kubenswrapper[7744]: E0220 14:46:49.744435 7744 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 20 14:46:49.744734 master-0 kubenswrapper[7744]: I0220 14:46:49.744474 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:49.744734 master-0 kubenswrapper[7744]: E0220 14:46:49.744504 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls podName:d28490b0-96ca-4fe0-8fae-e6f8390f933b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.744481388 +0000 UTC m=+16.946681428 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls") pod "dns-operator-8c7d49845-gkrph" (UID: "d28490b0-96ca-4fe0-8fae-e6f8390f933b") : secret "metrics-tls" not found Feb 20 14:46:49.744734 master-0 kubenswrapper[7744]: E0220 14:46:49.744563 7744 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 20 14:46:49.744734 master-0 kubenswrapper[7744]: I0220 14:46:49.744555 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:46:49.744734 master-0 kubenswrapper[7744]: E0220 14:46:49.744629 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics podName:c0a3548f-299c-4234-9bf1-c93efcb9740b nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.744612541 +0000 UTC m=+16.946812481 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-97m7r" (UID: "c0a3548f-299c-4234-9bf1-c93efcb9740b") : secret "marketplace-operator-metrics" not found Feb 20 14:46:49.744734 master-0 kubenswrapper[7744]: I0220 14:46:49.744667 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:49.744734 master-0 kubenswrapper[7744]: E0220 14:46:49.744677 7744 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 20 14:46:49.744734 master-0 kubenswrapper[7744]: E0220 14:46:49.744980 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls podName:b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.7449713 +0000 UTC m=+16.947171230 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-g7glt" (UID: "b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1") : secret "image-registry-operator-tls" not found Feb 20 14:46:49.745852 master-0 kubenswrapper[7744]: E0220 14:46:49.744691 7744 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 20 14:46:49.745852 master-0 kubenswrapper[7744]: E0220 14:46:49.744763 7744 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 20 14:46:49.745852 master-0 kubenswrapper[7744]: E0220 14:46:49.745011 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls podName:419f28a9-8fd7-4b59-9554-4d884a1208b5 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.745004741 +0000 UTC m=+16.947204671 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-p7mjp" (UID: "419f28a9-8fd7-4b59-9554-4d884a1208b5") : secret "cluster-monitoring-operator-tls" not found Feb 20 14:46:49.745852 master-0 kubenswrapper[7744]: E0220 14:46:49.745094 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.745070853 +0000 UTC m=+16.947270913 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "node-tuning-operator-tls" not found Feb 20 14:46:49.845654 master-0 kubenswrapper[7744]: I0220 14:46:49.845567 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-config\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:49.845654 master-0 kubenswrapper[7744]: I0220 14:46:49.845633 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:49.845654 master-0 kubenswrapper[7744]: I0220 14:46:49.845669 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:46:49.846168 master-0 kubenswrapper[7744]: I0220 14:46:49.845696 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") 
" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" Feb 20 14:46:49.846168 master-0 kubenswrapper[7744]: I0220 14:46:49.845723 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf4d8\" (UniqueName: \"kubernetes.io/projected/9c46d834-1901-4616-b598-7890bfe0bc72-kube-api-access-kf4d8\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:49.846168 master-0 kubenswrapper[7744]: I0220 14:46:49.845753 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:49.846168 master-0 kubenswrapper[7744]: E0220 14:46:49.845871 7744 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 20 14:46:49.846168 master-0 kubenswrapper[7744]: E0220 14:46:49.845907 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.845892929 +0000 UTC m=+17.048092849 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : secret "metrics-daemon-secret" not found Feb 20 14:46:49.846582 master-0 kubenswrapper[7744]: E0220 14:46:49.846252 7744 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 20 14:46:49.846582 master-0 kubenswrapper[7744]: E0220 14:46:49.846405 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs podName:a1fb2774-6dd7-4429-9df3-4ddfcdaac939 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.84636984 +0000 UTC m=+17.048569840 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-wl49x" (UID: "a1fb2774-6dd7-4429-9df3-4ddfcdaac939") : secret "multus-admission-controller-secret" not found Feb 20 14:46:49.947370 master-0 kubenswrapper[7744]: I0220 14:46:49.947275 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-config\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:49.947781 master-0 kubenswrapper[7744]: I0220 14:46:49.947689 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " 
pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:49.948050 master-0 kubenswrapper[7744]: E0220 14:46:49.947993 7744 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 14:46:49.948169 master-0 kubenswrapper[7744]: E0220 14:46:49.948105 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert podName:9c46d834-1901-4616-b598-7890bfe0bc72 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:50.448076858 +0000 UTC m=+9.650276818 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert") pod "route-controller-manager-74f69747fd-7s6sc" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72") : secret "serving-cert" not found Feb 20 14:46:49.948307 master-0 kubenswrapper[7744]: I0220 14:46:49.948255 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf4d8\" (UniqueName: \"kubernetes.io/projected/9c46d834-1901-4616-b598-7890bfe0bc72-kube-api-access-kf4d8\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:49.948410 master-0 kubenswrapper[7744]: I0220 14:46:49.948323 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:49.948881 master-0 kubenswrapper[7744]: I0220 14:46:49.948815 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-config\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:49.949019 master-0 kubenswrapper[7744]: E0220 14:46:49.948838 7744 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 20 14:46:49.949019 master-0 kubenswrapper[7744]: E0220 14:46:49.948989 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca podName:9c46d834-1901-4616-b598-7890bfe0bc72 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:50.44896322 +0000 UTC m=+9.651163170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca") pod "route-controller-manager-74f69747fd-7s6sc" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72") : configmap "client-ca" not found Feb 20 14:46:49.982659 master-0 kubenswrapper[7744]: I0220 14:46:49.982504 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf4d8\" (UniqueName: \"kubernetes.io/projected/9c46d834-1901-4616-b598-7890bfe0bc72-kube-api-access-kf4d8\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:50.104577 master-0 kubenswrapper[7744]: I0220 14:46:50.104452 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:46:50.111080 master-0 kubenswrapper[7744]: I0220 14:46:50.110777 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:46:50.149878 master-0 kubenswrapper[7744]: I0220 14:46:50.149773 7744 generic.go:334] "Generic (PLEG): container finished" podID="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" containerID="8b677f9dfe1adb3fd4defb49e7d0b98454fc7a8c20e2d380e3e690cdf86abbc6" exitCode=0 Feb 20 14:46:50.149878 master-0 kubenswrapper[7744]: I0220 14:46:50.149825 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerDied","Data":"8b677f9dfe1adb3fd4defb49e7d0b98454fc7a8c20e2d380e3e690cdf86abbc6"} Feb 20 14:46:50.150210 master-0 kubenswrapper[7744]: I0220 14:46:50.149905 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:50.169084 master-0 kubenswrapper[7744]: I0220 14:46:50.166339 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg" Feb 20 14:46:50.251127 master-0 kubenswrapper[7744]: I0220 14:46:50.251062 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnrkc\" (UniqueName: \"kubernetes.io/projected/14bdb95b-35f5-44f1-887c-c9e4a20948a6-kube-api-access-vnrkc\") pod \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " Feb 20 14:46:50.259863 master-0 kubenswrapper[7744]: I0220 14:46:50.259749 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14bdb95b-35f5-44f1-887c-c9e4a20948a6-kube-api-access-vnrkc" (OuterVolumeSpecName: "kube-api-access-vnrkc") pod "14bdb95b-35f5-44f1-887c-c9e4a20948a6" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6"). InnerVolumeSpecName "kube-api-access-vnrkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:46:50.353429 master-0 kubenswrapper[7744]: I0220 14:46:50.353332 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnrkc\" (UniqueName: \"kubernetes.io/projected/14bdb95b-35f5-44f1-887c-c9e4a20948a6-kube-api-access-vnrkc\") on node \"master-0\" DevicePath \"\"" Feb 20 14:46:50.357314 master-0 kubenswrapper[7744]: I0220 14:46:50.357214 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:46:50.366233 master-0 kubenswrapper[7744]: I0220 14:46:50.366170 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:46:50.454330 master-0 kubenswrapper[7744]: I0220 14:46:50.454227 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:50.454677 master-0 kubenswrapper[7744]: E0220 14:46:50.454452 7744 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 20 14:46:50.454677 master-0 kubenswrapper[7744]: E0220 14:46:50.454560 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca podName:9c46d834-1901-4616-b598-7890bfe0bc72 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:51.454531485 +0000 UTC m=+10.656731455 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca") pod "route-controller-manager-74f69747fd-7s6sc" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72") : configmap "client-ca" not found
Feb 20 14:46:50.454824 master-0 kubenswrapper[7744]: I0220 14:46:50.454747 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc"
Feb 20 14:46:50.455052 master-0 kubenswrapper[7744]: E0220 14:46:50.455006 7744 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 14:46:50.455143 master-0 kubenswrapper[7744]: E0220 14:46:50.455120 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert podName:9c46d834-1901-4616-b598-7890bfe0bc72 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:51.455092539 +0000 UTC m=+10.657292549 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert") pod "route-controller-manager-74f69747fd-7s6sc" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72") : secret "serving-cert" not found
Feb 20 14:46:50.556709 master-0 kubenswrapper[7744]: I0220 14:46:50.556529 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg"
Feb 20 14:46:50.556709 master-0 kubenswrapper[7744]: I0220 14:46:50.556610 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14bdb95b-35f5-44f1-887c-c9e4a20948a6-serving-cert\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg"
Feb 20 14:46:50.556709 master-0 kubenswrapper[7744]: I0220 14:46:50.556666 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-config\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg"
Feb 20 14:46:50.557625 master-0 kubenswrapper[7744]: I0220 14:46:50.557149 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-client-ca\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg"
Feb 20 14:46:50.557625 master-0 kubenswrapper[7744]: E0220 14:46:50.557360 7744 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 14:46:50.557625 master-0 kubenswrapper[7744]: E0220 14:46:50.557466 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 14:46:50.557625 master-0 kubenswrapper[7744]: E0220 14:46:50.557494 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14bdb95b-35f5-44f1-887c-c9e4a20948a6-serving-cert podName:14bdb95b-35f5-44f1-887c-c9e4a20948a6 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:52.557454092 +0000 UTC m=+11.759654122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/14bdb95b-35f5-44f1-887c-c9e4a20948a6-serving-cert") pod "controller-manager-6c9b8f4d95-8vtvg" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6") : secret "serving-cert" not found
Feb 20 14:46:50.557625 master-0 kubenswrapper[7744]: E0220 14:46:50.557562 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-client-ca podName:14bdb95b-35f5-44f1-887c-c9e4a20948a6 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:52.557528524 +0000 UTC m=+11.759728474 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-client-ca") pod "controller-manager-6c9b8f4d95-8vtvg" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6") : configmap "client-ca" not found
Feb 20 14:46:50.558432 master-0 kubenswrapper[7744]: I0220 14:46:50.558359 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-config\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg"
Feb 20 14:46:50.559406 master-0 kubenswrapper[7744]: I0220 14:46:50.559351 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-proxy-ca-bundles\") pod \"controller-manager-6c9b8f4d95-8vtvg\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") " pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg"
Feb 20 14:46:50.658758 master-0 kubenswrapper[7744]: I0220 14:46:50.658656 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-config\") pod \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") "
Feb 20 14:46:50.659144 master-0 kubenswrapper[7744]: I0220 14:46:50.658792 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-proxy-ca-bundles\") pod \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\" (UID: \"14bdb95b-35f5-44f1-887c-c9e4a20948a6\") "
Feb 20 14:46:50.659650 master-0 kubenswrapper[7744]: I0220 14:46:50.659549 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-config" (OuterVolumeSpecName: "config") pod "14bdb95b-35f5-44f1-887c-c9e4a20948a6" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:46:50.659767 master-0 kubenswrapper[7744]: I0220 14:46:50.659706 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "14bdb95b-35f5-44f1-887c-c9e4a20948a6" (UID: "14bdb95b-35f5-44f1-887c-c9e4a20948a6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:46:50.737597 master-0 kubenswrapper[7744]: I0220 14:46:50.737525 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:50.760517 master-0 kubenswrapper[7744]: I0220 14:46:50.760468 7744 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-config\") on node \"master-0\" DevicePath \"\""
Feb 20 14:46:50.760517 master-0 kubenswrapper[7744]: I0220 14:46:50.760517 7744 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 20 14:46:51.138795 master-0 kubenswrapper[7744]: I0220 14:46:51.138695 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:51.139138 master-0 kubenswrapper[7744]: I0220 14:46:51.138905 7744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 14:46:51.139138 master-0 kubenswrapper[7744]: I0220 14:46:51.138948 7744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 14:46:51.154092 master-0 kubenswrapper[7744]: I0220 14:46:51.154005 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg"
Feb 20 14:46:51.201327 master-0 kubenswrapper[7744]: I0220 14:46:51.201263 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56bf465fcf-72pzv"]
Feb 20 14:46:51.202278 master-0 kubenswrapper[7744]: I0220 14:46:51.202244 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg"]
Feb 20 14:46:51.202544 master-0 kubenswrapper[7744]: I0220 14:46:51.202519 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.208716 master-0 kubenswrapper[7744]: I0220 14:46:51.208661 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 20 14:46:51.208716 master-0 kubenswrapper[7744]: I0220 14:46:51.208687 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 20 14:46:51.209019 master-0 kubenswrapper[7744]: I0220 14:46:51.208982 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 20 14:46:51.209408 master-0 kubenswrapper[7744]: I0220 14:46:51.209296 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 20 14:46:51.210994 master-0 kubenswrapper[7744]: I0220 14:46:51.209586 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 20 14:46:51.212446 master-0 kubenswrapper[7744]: I0220 14:46:51.212408 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 14:46:51.212446 master-0 kubenswrapper[7744]: I0220 14:46:51.212572 7744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 14:46:51.220073 master-0 kubenswrapper[7744]: I0220 14:46:51.219999 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56bf465fcf-72pzv"]
Feb 20 14:46:51.225161 master-0 kubenswrapper[7744]: I0220 14:46:51.223748 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c9b8f4d95-8vtvg"]
Feb 20 14:46:51.234772 master-0 kubenswrapper[7744]: I0220 14:46:51.234719 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 20 14:46:51.330798 master-0 kubenswrapper[7744]: I0220 14:46:51.330618 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56bf465fcf-72pzv"]
Feb 20 14:46:51.331091 master-0 kubenswrapper[7744]: E0220 14:46:51.330937 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-845cw proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv" podUID="4cd1cbee-d11a-4f40-9284-cdee8dc4b5db"
Feb 20 14:46:51.368959 master-0 kubenswrapper[7744]: I0220 14:46:51.368904 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-client-ca\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.369181 master-0 kubenswrapper[7744]: I0220 14:46:51.369117 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-config\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.369311 master-0 kubenswrapper[7744]: I0220 14:46:51.369280 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-proxy-ca-bundles\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.369347 master-0 kubenswrapper[7744]: I0220 14:46:51.369314 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-845cw\" (UniqueName: \"kubernetes.io/projected/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-kube-api-access-845cw\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.369495 master-0 kubenswrapper[7744]: I0220 14:46:51.369470 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-serving-cert\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.369634 master-0 kubenswrapper[7744]: I0220 14:46:51.369550 7744 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14bdb95b-35f5-44f1-887c-c9e4a20948a6-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 14:46:51.369634 master-0 kubenswrapper[7744]: I0220 14:46:51.369570 7744 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14bdb95b-35f5-44f1-887c-c9e4a20948a6-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 14:46:51.470578 master-0 kubenswrapper[7744]: I0220 14:46:51.470517 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-proxy-ca-bundles\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.470919 master-0 kubenswrapper[7744]: I0220 14:46:51.470586 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-845cw\" (UniqueName: \"kubernetes.io/projected/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-kube-api-access-845cw\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.470919 master-0 kubenswrapper[7744]: I0220 14:46:51.470699 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-serving-cert\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.470919 master-0 kubenswrapper[7744]: I0220 14:46:51.470907 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc"
Feb 20 14:46:51.471206 master-0 kubenswrapper[7744]: I0220 14:46:51.471022 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-client-ca\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.471206 master-0 kubenswrapper[7744]: I0220 14:46:51.471079 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-config\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.471206 master-0 kubenswrapper[7744]: I0220 14:46:51.471144 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc"
Feb 20 14:46:51.471380 master-0 kubenswrapper[7744]: E0220 14:46:51.471358 7744 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 14:46:51.471488 master-0 kubenswrapper[7744]: E0220 14:46:51.471443 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-serving-cert podName:4cd1cbee-d11a-4f40-9284-cdee8dc4b5db nodeName:}" failed. No retries permitted until 2026-02-20 14:46:51.971417447 +0000 UTC m=+11.173617407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-serving-cert") pod "controller-manager-56bf465fcf-72pzv" (UID: "4cd1cbee-d11a-4f40-9284-cdee8dc4b5db") : secret "serving-cert" not found
Feb 20 14:46:51.471605 master-0 kubenswrapper[7744]: E0220 14:46:51.471488 7744 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 14:46:51.471605 master-0 kubenswrapper[7744]: E0220 14:46:51.471568 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert podName:9c46d834-1901-4616-b598-7890bfe0bc72 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:53.4715413 +0000 UTC m=+12.673741260 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert") pod "route-controller-manager-74f69747fd-7s6sc" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72") : secret "serving-cert" not found
Feb 20 14:46:51.471723 master-0 kubenswrapper[7744]: E0220 14:46:51.471628 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 14:46:51.471723 master-0 kubenswrapper[7744]: E0220 14:46:51.471667 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-client-ca podName:4cd1cbee-d11a-4f40-9284-cdee8dc4b5db nodeName:}" failed. No retries permitted until 2026-02-20 14:46:51.971653373 +0000 UTC m=+11.173853333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-client-ca") pod "controller-manager-56bf465fcf-72pzv" (UID: "4cd1cbee-d11a-4f40-9284-cdee8dc4b5db") : configmap "client-ca" not found
Feb 20 14:46:51.471723 master-0 kubenswrapper[7744]: E0220 14:46:51.471709 7744 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 14:46:51.471895 master-0 kubenswrapper[7744]: E0220 14:46:51.471758 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca podName:9c46d834-1901-4616-b598-7890bfe0bc72 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:53.471742875 +0000 UTC m=+12.673942835 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca") pod "route-controller-manager-74f69747fd-7s6sc" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72") : configmap "client-ca" not found
Feb 20 14:46:51.476329 master-0 kubenswrapper[7744]: I0220 14:46:51.473562 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-config\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.478256 master-0 kubenswrapper[7744]: I0220 14:46:51.478165 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-proxy-ca-bundles\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.509239 master-0 kubenswrapper[7744]: I0220 14:46:51.509111 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-845cw\" (UniqueName: \"kubernetes.io/projected/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-kube-api-access-845cw\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.604231 master-0 kubenswrapper[7744]: I0220 14:46:51.604099 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:51.618429 master-0 kubenswrapper[7744]: I0220 14:46:51.618177 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:51.976977 master-0 kubenswrapper[7744]: I0220 14:46:51.976679 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-client-ca\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.976977 master-0 kubenswrapper[7744]: I0220 14:46:51.976829 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-serving-cert\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:51.977206 master-0 kubenswrapper[7744]: E0220 14:46:51.977050 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 14:46:51.977206 master-0 kubenswrapper[7744]: E0220 14:46:51.977185 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-client-ca podName:4cd1cbee-d11a-4f40-9284-cdee8dc4b5db nodeName:}" failed. No retries permitted until 2026-02-20 14:46:52.977153038 +0000 UTC m=+12.179352958 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-client-ca") pod "controller-manager-56bf465fcf-72pzv" (UID: "4cd1cbee-d11a-4f40-9284-cdee8dc4b5db") : configmap "client-ca" not found
Feb 20 14:46:51.977546 master-0 kubenswrapper[7744]: E0220 14:46:51.977447 7744 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 14:46:51.977546 master-0 kubenswrapper[7744]: E0220 14:46:51.977528 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-serving-cert podName:4cd1cbee-d11a-4f40-9284-cdee8dc4b5db nodeName:}" failed. No retries permitted until 2026-02-20 14:46:52.977509706 +0000 UTC m=+12.179709636 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-serving-cert") pod "controller-manager-56bf465fcf-72pzv" (UID: "4cd1cbee-d11a-4f40-9284-cdee8dc4b5db") : secret "serving-cert" not found
Feb 20 14:46:52.157873 master-0 kubenswrapper[7744]: I0220 14:46:52.157806 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:52.165101 master-0 kubenswrapper[7744]: I0220 14:46:52.164869 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:46:52.172578 master-0 kubenswrapper[7744]: I0220 14:46:52.172507 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:52.287242 master-0 kubenswrapper[7744]: I0220 14:46:52.286648 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-845cw\" (UniqueName: \"kubernetes.io/projected/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-kube-api-access-845cw\") pod \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") "
Feb 20 14:46:52.287495 master-0 kubenswrapper[7744]: I0220 14:46:52.287256 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-proxy-ca-bundles\") pod \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") "
Feb 20 14:46:52.287495 master-0 kubenswrapper[7744]: I0220 14:46:52.287336 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-config\") pod \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") "
Feb 20 14:46:52.289238 master-0 kubenswrapper[7744]: I0220 14:46:52.289172 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4cd1cbee-d11a-4f40-9284-cdee8dc4b5db" (UID: "4cd1cbee-d11a-4f40-9284-cdee8dc4b5db"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:46:52.289510 master-0 kubenswrapper[7744]: I0220 14:46:52.289356 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-config" (OuterVolumeSpecName: "config") pod "4cd1cbee-d11a-4f40-9284-cdee8dc4b5db" (UID: "4cd1cbee-d11a-4f40-9284-cdee8dc4b5db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:46:52.294242 master-0 kubenswrapper[7744]: I0220 14:46:52.294143 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-kube-api-access-845cw" (OuterVolumeSpecName: "kube-api-access-845cw") pod "4cd1cbee-d11a-4f40-9284-cdee8dc4b5db" (UID: "4cd1cbee-d11a-4f40-9284-cdee8dc4b5db"). InnerVolumeSpecName "kube-api-access-845cw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:46:52.388956 master-0 kubenswrapper[7744]: I0220 14:46:52.388844 7744 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-config\") on node \"master-0\" DevicePath \"\""
Feb 20 14:46:52.388956 master-0 kubenswrapper[7744]: I0220 14:46:52.388890 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-845cw\" (UniqueName: \"kubernetes.io/projected/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-kube-api-access-845cw\") on node \"master-0\" DevicePath \"\""
Feb 20 14:46:52.388956 master-0 kubenswrapper[7744]: I0220 14:46:52.388900 7744 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 20 14:46:52.994891 master-0 kubenswrapper[7744]: I0220 14:46:52.994821 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-serving-cert\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:52.997688 master-0 kubenswrapper[7744]: I0220 14:46:52.995089 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-client-ca\") pod \"controller-manager-56bf465fcf-72pzv\" (UID: \"4cd1cbee-d11a-4f40-9284-cdee8dc4b5db\") " pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:52.997688 master-0 kubenswrapper[7744]: E0220 14:46:52.995142 7744 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 14:46:52.997688 master-0 kubenswrapper[7744]: E0220 14:46:52.995251 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 20 14:46:52.997688 master-0 kubenswrapper[7744]: E0220 14:46:52.995281 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-serving-cert podName:4cd1cbee-d11a-4f40-9284-cdee8dc4b5db nodeName:}" failed. No retries permitted until 2026-02-20 14:46:54.995249489 +0000 UTC m=+14.197449419 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-serving-cert") pod "controller-manager-56bf465fcf-72pzv" (UID: "4cd1cbee-d11a-4f40-9284-cdee8dc4b5db") : secret "serving-cert" not found
Feb 20 14:46:52.997688 master-0 kubenswrapper[7744]: E0220 14:46:52.995310 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-client-ca podName:4cd1cbee-d11a-4f40-9284-cdee8dc4b5db nodeName:}" failed. No retries permitted until 2026-02-20 14:46:54.99529356 +0000 UTC m=+14.197493480 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-client-ca") pod "controller-manager-56bf465fcf-72pzv" (UID: "4cd1cbee-d11a-4f40-9284-cdee8dc4b5db") : configmap "client-ca" not found
Feb 20 14:46:53.050996 master-0 kubenswrapper[7744]: I0220 14:46:53.050773 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14bdb95b-35f5-44f1-887c-c9e4a20948a6" path="/var/lib/kubelet/pods/14bdb95b-35f5-44f1-887c-c9e4a20948a6/volumes"
Feb 20 14:46:53.165140 master-0 kubenswrapper[7744]: I0220 14:46:53.165076 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56bf465fcf-72pzv"
Feb 20 14:46:53.165140 master-0 kubenswrapper[7744]: I0220 14:46:53.165108 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerStarted","Data":"e7d3fca444d3332e414ef45d428d9305bcf3afae66213559a3b368f710b1a743"}
Feb 20 14:46:53.235756 master-0 kubenswrapper[7744]: I0220 14:46:53.235563 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56bf465fcf-72pzv"]
Feb 20 14:46:53.237052 master-0 kubenswrapper[7744]: I0220 14:46:53.236991 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-866d45c75d-zvq4v"]
Feb 20 14:46:53.237732 master-0 kubenswrapper[7744]: I0220 14:46:53.237672 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56bf465fcf-72pzv"]
Feb 20 14:46:53.237905 master-0 kubenswrapper[7744]: I0220 14:46:53.237846 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v"
Feb 20 14:46:53.239508 master-0 kubenswrapper[7744]: I0220 14:46:53.239429 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 20 14:46:53.241425 master-0 kubenswrapper[7744]: I0220 14:46:53.240965 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 20 14:46:53.241425 master-0 kubenswrapper[7744]: I0220 14:46:53.241232 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 20 14:46:53.247900 master-0 kubenswrapper[7744]: I0220 14:46:53.241681 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 20 14:46:53.247900 master-0 kubenswrapper[7744]: I0220 14:46:53.244633 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 20 14:46:53.252750 master-0 kubenswrapper[7744]: I0220 14:46:53.252676 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-866d45c75d-zvq4v"]
Feb 20 14:46:53.256477 master-0 kubenswrapper[7744]: I0220 14:46:53.256406 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 20 14:46:53.400211 master-0 kubenswrapper[7744]: I0220 14:46:53.400120 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-proxy-ca-bundles\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v"
Feb 20 14:46:53.400211 master-0 kubenswrapper[7744]: I0220 14:46:53.400209 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v"
Feb 20 14:46:53.400479 master-0 kubenswrapper[7744]: I0220 14:46:53.400279 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d5fl\" (UniqueName: \"kubernetes.io/projected/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-kube-api-access-5d5fl\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v"
Feb 20 14:46:53.400479 master-0 kubenswrapper[7744]: I0220 14:46:53.400456 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v"
Feb 20 14:46:53.400632 master-0 kubenswrapper[7744]: I0220 14:46:53.400514 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-config\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v"
Feb 20 14:46:53.400632 master-0 kubenswrapper[7744]: I0220 14:46:53.400593 7744 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 14:46:53.400746 master-0 kubenswrapper[7744]: I0220 14:46:53.400632 7744 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 14:46:53.501480 master-0 kubenswrapper[7744]: I0220 14:46:53.501269 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc"
Feb 20 14:46:53.501480 master-0 kubenswrapper[7744]: I0220 14:46:53.501325 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-proxy-ca-bundles\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v"
Feb 20 14:46:53.501480 master-0 kubenswrapper[7744]: I0220 14:46:53.501358 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v"
Feb 20 14:46:53.501480 master-0 kubenswrapper[7744]: I0220 14:46:53.501400 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d5fl\" (UniqueName: \"kubernetes.io/projected/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-kube-api-access-5d5fl\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") "
pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:46:53.501803 master-0 kubenswrapper[7744]: E0220 14:46:53.501558 7744 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 14:46:53.501803 master-0 kubenswrapper[7744]: E0220 14:46:53.501625 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert podName:9c46d834-1901-4616-b598-7890bfe0bc72 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.501604945 +0000 UTC m=+16.703804875 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert") pod "route-controller-manager-74f69747fd-7s6sc" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72") : secret "serving-cert" not found Feb 20 14:46:53.501803 master-0 kubenswrapper[7744]: I0220 14:46:53.501726 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:53.501803 master-0 kubenswrapper[7744]: I0220 14:46:53.501784 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:46:53.502045 master-0 kubenswrapper[7744]: I0220 14:46:53.501815 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-config\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:46:53.502045 master-0 kubenswrapper[7744]: E0220 14:46:53.502042 7744 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 20 14:46:53.502166 master-0 kubenswrapper[7744]: E0220 14:46:53.502076 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca podName:9c46d834-1901-4616-b598-7890bfe0bc72 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.502065866 +0000 UTC m=+16.704265796 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca") pod "route-controller-manager-74f69747fd-7s6sc" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72") : configmap "client-ca" not found Feb 20 14:46:53.502423 master-0 kubenswrapper[7744]: E0220 14:46:53.502333 7744 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 14:46:53.502423 master-0 kubenswrapper[7744]: E0220 14:46:53.502397 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert podName:c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:54.002377453 +0000 UTC m=+13.204577403 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert") pod "controller-manager-866d45c75d-zvq4v" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4") : secret "serving-cert" not found Feb 20 14:46:53.502596 master-0 kubenswrapper[7744]: E0220 14:46:53.502399 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 20 14:46:53.502596 master-0 kubenswrapper[7744]: E0220 14:46:53.502530 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca podName:c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:54.002490566 +0000 UTC m=+13.204690556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca") pod "controller-manager-866d45c75d-zvq4v" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4") : configmap "client-ca" not found Feb 20 14:46:53.502815 master-0 kubenswrapper[7744]: I0220 14:46:53.502772 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-config\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:46:53.504067 master-0 kubenswrapper[7744]: I0220 14:46:53.504012 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-proxy-ca-bundles\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:46:53.522450 master-0 
kubenswrapper[7744]: I0220 14:46:53.522404 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d5fl\" (UniqueName: \"kubernetes.io/projected/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-kube-api-access-5d5fl\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:46:54.007278 master-0 kubenswrapper[7744]: I0220 14:46:54.007152 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:46:54.008329 master-0 kubenswrapper[7744]: E0220 14:46:54.007384 7744 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 14:46:54.008329 master-0 kubenswrapper[7744]: E0220 14:46:54.007537 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert podName:c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:55.007499548 +0000 UTC m=+14.209699558 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert") pod "controller-manager-866d45c75d-zvq4v" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4") : secret "serving-cert" not found Feb 20 14:46:54.008329 master-0 kubenswrapper[7744]: I0220 14:46:54.007600 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:46:54.008329 master-0 kubenswrapper[7744]: E0220 14:46:54.007810 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 20 14:46:54.008329 master-0 kubenswrapper[7744]: E0220 14:46:54.007872 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca podName:c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:55.007852716 +0000 UTC m=+14.210052676 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca") pod "controller-manager-866d45c75d-zvq4v" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4") : configmap "client-ca" not found Feb 20 14:46:55.019225 master-0 kubenswrapper[7744]: I0220 14:46:55.018886 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:46:55.019225 master-0 kubenswrapper[7744]: E0220 14:46:55.019171 7744 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 14:46:55.020262 master-0 kubenswrapper[7744]: I0220 14:46:55.019254 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:46:55.020262 master-0 kubenswrapper[7744]: E0220 14:46:55.019310 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert podName:c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.019262495 +0000 UTC m=+16.221462475 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert") pod "controller-manager-866d45c75d-zvq4v" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4") : secret "serving-cert" not found Feb 20 14:46:55.020262 master-0 kubenswrapper[7744]: E0220 14:46:55.019340 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 20 14:46:55.020262 master-0 kubenswrapper[7744]: E0220 14:46:55.019394 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca podName:c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4 nodeName:}" failed. No retries permitted until 2026-02-20 14:46:57.019378818 +0000 UTC m=+16.221578748 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca") pod "controller-manager-866d45c75d-zvq4v" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4") : configmap "client-ca" not found Feb 20 14:46:55.045948 master-0 kubenswrapper[7744]: I0220 14:46:55.045777 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd1cbee-d11a-4f40-9284-cdee8dc4b5db" path="/var/lib/kubelet/pods/4cd1cbee-d11a-4f40-9284-cdee8dc4b5db/volumes" Feb 20 14:46:55.179002 master-0 kubenswrapper[7744]: I0220 14:46:55.178897 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr" event={"ID":"900e244c-67aa-402f-b5f0-d37c5c1cedf7","Type":"ContainerStarted","Data":"1c41dabedad84cad3a05f49f849b79c399c388e8f2b9c7bfb18efcd28c2ae0be"} Feb 20 14:46:56.162267 master-0 kubenswrapper[7744]: I0220 14:46:56.162122 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6"] Feb 20 14:46:56.164156 master-0 
kubenswrapper[7744]: I0220 14:46:56.164115 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" Feb 20 14:46:56.185870 master-0 kubenswrapper[7744]: I0220 14:46:56.185809 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6"] Feb 20 14:46:56.341326 master-0 kubenswrapper[7744]: I0220 14:46:56.341229 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lp29\" (UniqueName: \"kubernetes.io/projected/a1af84e0-776b-4285-906a-6880dbc82a7b-kube-api-access-6lp29\") pod \"csi-snapshot-controller-6847bb4785-2mtj6\" (UID: \"a1af84e0-776b-4285-906a-6880dbc82a7b\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" Feb 20 14:46:56.442683 master-0 kubenswrapper[7744]: I0220 14:46:56.442603 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lp29\" (UniqueName: \"kubernetes.io/projected/a1af84e0-776b-4285-906a-6880dbc82a7b-kube-api-access-6lp29\") pod \"csi-snapshot-controller-6847bb4785-2mtj6\" (UID: \"a1af84e0-776b-4285-906a-6880dbc82a7b\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" Feb 20 14:46:56.478567 master-0 kubenswrapper[7744]: I0220 14:46:56.478507 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lp29\" (UniqueName: \"kubernetes.io/projected/a1af84e0-776b-4285-906a-6880dbc82a7b-kube-api-access-6lp29\") pod \"csi-snapshot-controller-6847bb4785-2mtj6\" (UID: \"a1af84e0-776b-4285-906a-6880dbc82a7b\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" Feb 20 14:46:56.527705 master-0 kubenswrapper[7744]: I0220 14:46:56.527635 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" Feb 20 14:46:56.764744 master-0 kubenswrapper[7744]: I0220 14:46:56.764391 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6"] Feb 20 14:46:57.050919 master-0 kubenswrapper[7744]: I0220 14:46:57.050851 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:46:57.051086 master-0 kubenswrapper[7744]: E0220 14:46:57.051015 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 20 14:46:57.051179 master-0 kubenswrapper[7744]: E0220 14:46:57.051100 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca podName:c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:01.051077412 +0000 UTC m=+20.253277372 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca") pod "controller-manager-866d45c75d-zvq4v" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4") : configmap "client-ca" not found Feb 20 14:46:57.051179 master-0 kubenswrapper[7744]: I0220 14:46:57.051134 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:46:57.051323 master-0 kubenswrapper[7744]: E0220 14:46:57.051298 7744 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 14:46:57.051392 master-0 kubenswrapper[7744]: E0220 14:46:57.051354 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert podName:c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:01.051340219 +0000 UTC m=+20.253540179 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert") pod "controller-manager-866d45c75d-zvq4v" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4") : secret "serving-cert" not found Feb 20 14:46:57.188337 master-0 kubenswrapper[7744]: I0220 14:46:57.188224 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerStarted","Data":"ba0f9ce144b093c1fbdb0462da21ced21845e2aa8fb2233766270fcddb816e51"} Feb 20 14:46:57.190558 master-0 kubenswrapper[7744]: I0220 14:46:57.190482 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" event={"ID":"c81ad608-a8ad-4289-a8d2-d48acb9b540c","Type":"ContainerStarted","Data":"5433accfcf1efda61ccbe8f683016067c773a6f6dbc87107ff277c75114e35c4"} Feb 20 14:46:57.557794 master-0 kubenswrapper[7744]: I0220 14:46:57.557699 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:57.558181 master-0 kubenswrapper[7744]: E0220 14:46:57.557988 7744 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 20 14:46:57.558181 master-0 kubenswrapper[7744]: E0220 14:46:57.558138 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca podName:9c46d834-1901-4616-b598-7890bfe0bc72 nodeName:}" failed. 
No retries permitted until 2026-02-20 14:47:05.558093193 +0000 UTC m=+24.760293183 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca") pod "route-controller-manager-74f69747fd-7s6sc" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72") : configmap "client-ca" not found Feb 20 14:46:57.559205 master-0 kubenswrapper[7744]: I0220 14:46:57.559126 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:46:57.559507 master-0 kubenswrapper[7744]: E0220 14:46:57.559428 7744 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 14:46:57.559625 master-0 kubenswrapper[7744]: E0220 14:46:57.559553 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert podName:9c46d834-1901-4616-b598-7890bfe0bc72 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:05.559524217 +0000 UTC m=+24.761724167 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert") pod "route-controller-manager-74f69747fd-7s6sc" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72") : secret "serving-cert" not found Feb 20 14:46:57.762369 master-0 kubenswrapper[7744]: I0220 14:46:57.762305 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:46:57.762641 master-0 kubenswrapper[7744]: I0220 14:46:57.762403 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:57.762641 master-0 kubenswrapper[7744]: I0220 14:46:57.762428 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:46:57.762641 master-0 kubenswrapper[7744]: I0220 14:46:57.762462 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " 
pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:46:57.762641 master-0 kubenswrapper[7744]: I0220 14:46:57.762506 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:46:57.762641 master-0 kubenswrapper[7744]: I0220 14:46:57.762530 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:46:57.762641 master-0 kubenswrapper[7744]: I0220 14:46:57.762550 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:46:57.762641 master-0 kubenswrapper[7744]: I0220 14:46:57.762578 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 
14:46:57.763042 master-0 kubenswrapper[7744]: E0220 14:46:57.762697 7744 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 20 14:46:57.763042 master-0 kubenswrapper[7744]: E0220 14:46:57.762752 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert podName:1fe69517-eec2-4721-933c-fa27cea7ab1f nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.762733878 +0000 UTC m=+32.964933798 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2sw9z" (UID: "1fe69517-eec2-4721-933c-fa27cea7ab1f") : secret "package-server-manager-serving-cert" not found Feb 20 14:46:57.763042 master-0 kubenswrapper[7744]: E0220 14:46:57.762800 7744 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 20 14:46:57.763042 master-0 kubenswrapper[7744]: E0220 14:46:57.762895 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics podName:c0a3548f-299c-4234-9bf1-c93efcb9740b nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.762872931 +0000 UTC m=+32.965072851 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-97m7r" (UID: "c0a3548f-299c-4234-9bf1-c93efcb9740b") : secret "marketplace-operator-metrics" not found Feb 20 14:46:57.763042 master-0 kubenswrapper[7744]: E0220 14:46:57.762966 7744 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 20 14:46:57.763042 master-0 kubenswrapper[7744]: E0220 14:46:57.763001 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert podName:4cede061-d85a-4366-9f1e-90be51f726fc nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.762984304 +0000 UTC m=+32.965184294 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert") pod "cluster-version-operator-5cfd9759cf-jf2s9" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc") : secret "cluster-version-operator-serving-cert" not found Feb 20 14:46:57.763042 master-0 kubenswrapper[7744]: E0220 14:46:57.763044 7744 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 20 14:46:57.763401 master-0 kubenswrapper[7744]: E0220 14:46:57.763069 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.763060736 +0000 UTC m=+32.965260696 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "performance-addon-operator-webhook-cert" not found Feb 20 14:46:57.763401 master-0 kubenswrapper[7744]: E0220 14:46:57.763117 7744 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 20 14:46:57.763401 master-0 kubenswrapper[7744]: E0220 14:46:57.763139 7744 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 20 14:46:57.763401 master-0 kubenswrapper[7744]: E0220 14:46:57.763145 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls podName:d28490b0-96ca-4fe0-8fae-e6f8390f933b nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.763136888 +0000 UTC m=+32.965336858 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls") pod "dns-operator-8c7d49845-gkrph" (UID: "d28490b0-96ca-4fe0-8fae-e6f8390f933b") : secret "metrics-tls" not found Feb 20 14:46:57.763401 master-0 kubenswrapper[7744]: E0220 14:46:57.763188 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls podName:b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.763178929 +0000 UTC m=+32.965378929 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-g7glt" (UID: "b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1") : secret "image-registry-operator-tls" not found Feb 20 14:46:57.763401 master-0 kubenswrapper[7744]: E0220 14:46:57.763194 7744 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 20 14:46:57.763401 master-0 kubenswrapper[7744]: E0220 14:46:57.763221 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.763212759 +0000 UTC m=+32.965412759 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : secret "node-tuning-operator-tls" not found Feb 20 14:46:57.763401 master-0 kubenswrapper[7744]: E0220 14:46:57.763231 7744 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 20 14:46:57.763401 master-0 kubenswrapper[7744]: E0220 14:46:57.763254 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls podName:419f28a9-8fd7-4b59-9554-4d884a1208b5 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.76324702 +0000 UTC m=+32.965447010 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-p7mjp" (UID: "419f28a9-8fd7-4b59-9554-4d884a1208b5") : secret "cluster-monitoring-operator-tls" not found Feb 20 14:46:57.863569 master-0 kubenswrapper[7744]: I0220 14:46:57.863507 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:46:57.863569 master-0 kubenswrapper[7744]: I0220 14:46:57.863560 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" Feb 20 14:46:57.863975 master-0 kubenswrapper[7744]: E0220 14:46:57.863735 7744 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 20 14:46:57.863975 master-0 kubenswrapper[7744]: E0220 14:46:57.863779 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs podName:a1fb2774-6dd7-4429-9df3-4ddfcdaac939 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.863766919 +0000 UTC m=+33.065966839 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-wl49x" (UID: "a1fb2774-6dd7-4429-9df3-4ddfcdaac939") : secret "multus-admission-controller-secret" not found Feb 20 14:46:57.863975 master-0 kubenswrapper[7744]: E0220 14:46:57.863816 7744 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 20 14:46:57.863975 master-0 kubenswrapper[7744]: E0220 14:46:57.863832 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs podName:5ea4c132-b6d0-4dc9-942d-48e359eed418 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.863826771 +0000 UTC m=+33.066026691 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs") pod "network-metrics-daemon-99lkv" (UID: "5ea4c132-b6d0-4dc9-942d-48e359eed418") : secret "metrics-daemon-secret" not found Feb 20 14:46:58.195605 master-0 kubenswrapper[7744]: I0220 14:46:58.195558 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" event={"ID":"db9dc349-5216-43ff-8c17-3a9384a010ea","Type":"ContainerStarted","Data":"255184eff0270c34b8e6556e377cc8915ae25bb2f15df7164830c2551d563b2b"} Feb 20 14:46:59.203861 master-0 kubenswrapper[7744]: I0220 14:46:59.203452 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerStarted","Data":"05169fed6fef4d82074b47315517f420ef327f3261f2444e53508e66bd83fdf7"} Feb 20 14:46:59.206101 master-0 kubenswrapper[7744]: I0220 14:46:59.206036 7744 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-network-operator/iptables-alerter-cgp8r" event={"ID":"87cf4690-1ec1-44fc-94bd-730d9f2e6762","Type":"ContainerStarted","Data":"7f8dbc22b8958f7d49d97b2d1cc7318ab14c413e48ae6f880d4b31bcda852197"} Feb 20 14:46:59.220376 master-0 kubenswrapper[7744]: I0220 14:46:59.220288 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" podStartSLOduration=1.579623928 podStartE2EDuration="3.220263445s" podCreationTimestamp="2026-02-20 14:46:56 +0000 UTC" firstStartedPulling="2026-02-20 14:46:57.034093698 +0000 UTC m=+16.236293658" lastFinishedPulling="2026-02-20 14:46:58.674733225 +0000 UTC m=+17.876933175" observedRunningTime="2026-02-20 14:46:59.218977413 +0000 UTC m=+18.421177363" watchObservedRunningTime="2026-02-20 14:46:59.220263445 +0000 UTC m=+18.422463395" Feb 20 14:46:59.794392 master-0 kubenswrapper[7744]: I0220 14:46:59.794157 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-fc795"] Feb 20 14:46:59.795189 master-0 kubenswrapper[7744]: I0220 14:46:59.795134 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 14:46:59.799284 master-0 kubenswrapper[7744]: I0220 14:46:59.799196 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 20 14:46:59.799454 master-0 kubenswrapper[7744]: I0220 14:46:59.799280 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 20 14:46:59.799970 master-0 kubenswrapper[7744]: I0220 14:46:59.799851 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 20 14:46:59.800594 master-0 kubenswrapper[7744]: I0220 14:46:59.800507 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 20 14:46:59.808861 master-0 kubenswrapper[7744]: I0220 14:46:59.808770 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-fc795"] Feb 20 14:46:59.890215 master-0 kubenswrapper[7744]: I0220 14:46:59.890099 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k6br\" (UniqueName: \"kubernetes.io/projected/787a4fee-6625-4df5-a432-c7e1190da777-kube-api-access-9k6br\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 14:46:59.890215 master-0 kubenswrapper[7744]: I0220 14:46:59.890198 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/787a4fee-6625-4df5-a432-c7e1190da777-signing-cabundle\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 14:46:59.890671 master-0 kubenswrapper[7744]: I0220 14:46:59.890281 7744 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/787a4fee-6625-4df5-a432-c7e1190da777-signing-key\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 14:46:59.991453 master-0 kubenswrapper[7744]: I0220 14:46:59.991221 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k6br\" (UniqueName: \"kubernetes.io/projected/787a4fee-6625-4df5-a432-c7e1190da777-kube-api-access-9k6br\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 14:46:59.991453 master-0 kubenswrapper[7744]: I0220 14:46:59.991391 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/787a4fee-6625-4df5-a432-c7e1190da777-signing-cabundle\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 14:46:59.993133 master-0 kubenswrapper[7744]: I0220 14:46:59.991665 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/787a4fee-6625-4df5-a432-c7e1190da777-signing-key\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 14:46:59.993133 master-0 kubenswrapper[7744]: I0220 14:46:59.992855 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/787a4fee-6625-4df5-a432-c7e1190da777-signing-cabundle\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 14:47:00.004660 master-0 
kubenswrapper[7744]: I0220 14:47:00.004417 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/787a4fee-6625-4df5-a432-c7e1190da777-signing-key\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 14:47:00.027656 master-0 kubenswrapper[7744]: I0220 14:47:00.027560 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k6br\" (UniqueName: \"kubernetes.io/projected/787a4fee-6625-4df5-a432-c7e1190da777-kube-api-access-9k6br\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 14:47:00.136963 master-0 kubenswrapper[7744]: I0220 14:47:00.135998 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 14:47:00.217572 master-0 kubenswrapper[7744]: I0220 14:47:00.215801 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" event={"ID":"8157f73d-c757-40c4-80bc-3c9de2f2288a","Type":"ContainerStarted","Data":"cc8ec7e8b926ba49c143a81485ff0f3a14da5399a34238c1afe1d5e4cc71a0ba"} Feb 20 14:47:00.435582 master-0 kubenswrapper[7744]: I0220 14:47:00.435411 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-fc795"] Feb 20 14:47:01.121326 master-0 kubenswrapper[7744]: I0220 14:47:01.120895 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:47:01.121563 master-0 
kubenswrapper[7744]: I0220 14:47:01.121380 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:47:01.121563 master-0 kubenswrapper[7744]: E0220 14:47:01.121074 7744 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 14:47:01.121563 master-0 kubenswrapper[7744]: E0220 14:47:01.121479 7744 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 20 14:47:01.121563 master-0 kubenswrapper[7744]: E0220 14:47:01.121535 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca podName:c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:09.12151951 +0000 UTC m=+28.323719440 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca") pod "controller-manager-866d45c75d-zvq4v" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4") : configmap "client-ca" not found Feb 20 14:47:01.121743 master-0 kubenswrapper[7744]: E0220 14:47:01.121637 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert podName:c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:09.121565521 +0000 UTC m=+28.323765481 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert") pod "controller-manager-866d45c75d-zvq4v" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4") : secret "serving-cert" not found Feb 20 14:47:01.221089 master-0 kubenswrapper[7744]: I0220 14:47:01.221027 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-fc795" event={"ID":"787a4fee-6625-4df5-a432-c7e1190da777","Type":"ContainerStarted","Data":"49b822c3e47c1cd6ec2009b226ef965940d964b01865e3cc2dbdf575ba59319a"} Feb 20 14:47:01.221089 master-0 kubenswrapper[7744]: I0220 14:47:01.221094 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-fc795" event={"ID":"787a4fee-6625-4df5-a432-c7e1190da777","Type":"ContainerStarted","Data":"e1b8782a8564dd4906c6406ffd3ad6cd072d92723a07ad86ed42c394d07ab355"} Feb 20 14:47:01.223103 master-0 kubenswrapper[7744]: I0220 14:47:01.223067 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" event={"ID":"4c31b8a7-edcb-403d-9122-7eb740f7d659","Type":"ContainerStarted","Data":"941dd44ae98490c4a66ceb486a6367ef40fefdfd465008c4ef290585229b84c1"} Feb 20 14:47:01.275571 master-0 kubenswrapper[7744]: I0220 14:47:01.275213 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-576b4d78bd-fc795" podStartSLOduration=2.275180734 podStartE2EDuration="2.275180734s" podCreationTimestamp="2026-02-20 14:46:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:01.243547803 +0000 UTC m=+20.445747733" watchObservedRunningTime="2026-02-20 14:47:01.275180734 +0000 UTC m=+20.477380674" Feb 20 14:47:02.228315 master-0 kubenswrapper[7744]: I0220 14:47:02.228012 7744 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" event={"ID":"8b73ae08-0ad7-4f99-8002-6df0d984cd2c","Type":"ContainerStarted","Data":"cfbd27b76aa0dc7c10ce1de7a1bdca66b3303ee8a7bc370fa5d11a1d913c8168"} Feb 20 14:47:02.966046 master-0 kubenswrapper[7744]: I0220 14:47:02.966001 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh"] Feb 20 14:47:02.966623 master-0 kubenswrapper[7744]: I0220 14:47:02.966598 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" Feb 20 14:47:02.967941 master-0 kubenswrapper[7744]: I0220 14:47:02.967872 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 20 14:47:02.970805 master-0 kubenswrapper[7744]: I0220 14:47:02.970774 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 20 14:47:02.976586 master-0 kubenswrapper[7744]: I0220 14:47:02.976556 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh"] Feb 20 14:47:03.052705 master-0 kubenswrapper[7744]: I0220 14:47:03.052641 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrrq4\" (UniqueName: \"kubernetes.io/projected/af7b6f34-adca-4bdb-9e41-e2995a1d67a8-kube-api-access-nrrq4\") pod \"migrator-5c85bff57-9mbsh\" (UID: \"af7b6f34-adca-4bdb-9e41-e2995a1d67a8\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" Feb 20 14:47:03.156362 master-0 kubenswrapper[7744]: I0220 14:47:03.156258 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrrq4\" 
(UniqueName: \"kubernetes.io/projected/af7b6f34-adca-4bdb-9e41-e2995a1d67a8-kube-api-access-nrrq4\") pod \"migrator-5c85bff57-9mbsh\" (UID: \"af7b6f34-adca-4bdb-9e41-e2995a1d67a8\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" Feb 20 14:47:03.197814 master-0 kubenswrapper[7744]: I0220 14:47:03.197724 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrrq4\" (UniqueName: \"kubernetes.io/projected/af7b6f34-adca-4bdb-9e41-e2995a1d67a8-kube-api-access-nrrq4\") pod \"migrator-5c85bff57-9mbsh\" (UID: \"af7b6f34-adca-4bdb-9e41-e2995a1d67a8\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" Feb 20 14:47:03.282127 master-0 kubenswrapper[7744]: I0220 14:47:03.282047 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" Feb 20 14:47:03.356858 master-0 kubenswrapper[7744]: I0220 14:47:03.356749 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:47:03.357200 master-0 kubenswrapper[7744]: I0220 14:47:03.356915 7744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 14:47:03.397064 master-0 kubenswrapper[7744]: I0220 14:47:03.396972 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 14:47:03.514375 master-0 kubenswrapper[7744]: I0220 14:47:03.513974 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh"] Feb 20 14:47:04.040251 master-0 kubenswrapper[7744]: I0220 14:47:04.040174 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr"] Feb 20 14:47:04.041071 master-0 kubenswrapper[7744]: I0220 14:47:04.041042 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.043956 master-0 kubenswrapper[7744]: I0220 14:47:04.043907 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 20 14:47:04.044186 master-0 kubenswrapper[7744]: I0220 14:47:04.044154 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 20 14:47:04.044494 master-0 kubenswrapper[7744]: I0220 14:47:04.044477 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 20 14:47:04.067939 master-0 kubenswrapper[7744]: I0220 14:47:04.062678 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr"] Feb 20 14:47:04.067939 master-0 kubenswrapper[7744]: I0220 14:47:04.064916 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 20 14:47:04.069748 master-0 kubenswrapper[7744]: I0220 14:47:04.069146 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.069748 master-0 kubenswrapper[7744]: I0220 14:47:04.069214 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-cache\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.069748 master-0 kubenswrapper[7744]: 
I0220 14:47:04.069433 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.069748 master-0 kubenswrapper[7744]: I0220 14:47:04.069483 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.069748 master-0 kubenswrapper[7744]: I0220 14:47:04.069597 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.069748 master-0 kubenswrapper[7744]: I0220 14:47:04.069637 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lcqg\" (UniqueName: \"kubernetes.io/projected/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-kube-api-access-9lcqg\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.150620 master-0 kubenswrapper[7744]: I0220 14:47:04.150587 7744 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd"] Feb 20 14:47:04.151368 master-0 kubenswrapper[7744]: I0220 14:47:04.151353 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.153346 master-0 kubenswrapper[7744]: I0220 14:47:04.153319 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 20 14:47:04.153440 master-0 kubenswrapper[7744]: I0220 14:47:04.153424 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 20 14:47:04.162938 master-0 kubenswrapper[7744]: I0220 14:47:04.162908 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 20 14:47:04.163785 master-0 kubenswrapper[7744]: I0220 14:47:04.163741 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd"] Feb 20 14:47:04.172264 master-0 kubenswrapper[7744]: I0220 14:47:04.171269 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-cache\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.172264 master-0 kubenswrapper[7744]: I0220 14:47:04.171335 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.172264 master-0 kubenswrapper[7744]: I0220 14:47:04.171398 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/84a61910-48eb-4c27-8d69-f6aa7ce912ca-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.172264 master-0 kubenswrapper[7744]: I0220 14:47:04.171480 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/84a61910-48eb-4c27-8d69-f6aa7ce912ca-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.172264 master-0 kubenswrapper[7744]: I0220 14:47:04.171545 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.172264 master-0 kubenswrapper[7744]: I0220 14:47:04.171578 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5fng\" (UniqueName: \"kubernetes.io/projected/84a61910-48eb-4c27-8d69-f6aa7ce912ca-kube-api-access-l5fng\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.172264 
master-0 kubenswrapper[7744]: I0220 14:47:04.171635 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.172264 master-0 kubenswrapper[7744]: I0220 14:47:04.171721 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.172264 master-0 kubenswrapper[7744]: I0220 14:47:04.171735 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lcqg\" (UniqueName: \"kubernetes.io/projected/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-kube-api-access-9lcqg\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.172264 master-0 kubenswrapper[7744]: I0220 14:47:04.171800 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/84a61910-48eb-4c27-8d69-f6aa7ce912ca-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.172264 master-0 kubenswrapper[7744]: I0220 14:47:04.171904 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.172264 master-0 kubenswrapper[7744]: I0220 14:47:04.171989 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/84a61910-48eb-4c27-8d69-f6aa7ce912ca-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.173104 master-0 kubenswrapper[7744]: I0220 14:47:04.172889 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.175132 master-0 kubenswrapper[7744]: I0220 14:47:04.175100 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-cache\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.182166 master-0 kubenswrapper[7744]: I0220 14:47:04.182073 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.185633 master-0 kubenswrapper[7744]: I0220 14:47:04.185586 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.195703 master-0 kubenswrapper[7744]: I0220 14:47:04.195669 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lcqg\" (UniqueName: \"kubernetes.io/projected/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-kube-api-access-9lcqg\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.236457 master-0 kubenswrapper[7744]: I0220 14:47:04.236367 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" event={"ID":"af7b6f34-adca-4bdb-9e41-e2995a1d67a8","Type":"ContainerStarted","Data":"118104a32f855cf343fc9a68201c174973d8b0ae6653c1a549eeef25c7c2eefa"} Feb 20 14:47:04.273267 master-0 kubenswrapper[7744]: I0220 14:47:04.273177 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/84a61910-48eb-4c27-8d69-f6aa7ce912ca-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.273267 master-0 kubenswrapper[7744]: I0220 14:47:04.273272 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/84a61910-48eb-4c27-8d69-f6aa7ce912ca-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.273513 master-0 kubenswrapper[7744]: I0220 14:47:04.273344 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/84a61910-48eb-4c27-8d69-f6aa7ce912ca-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.273513 master-0 kubenswrapper[7744]: I0220 14:47:04.273392 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5fng\" (UniqueName: \"kubernetes.io/projected/84a61910-48eb-4c27-8d69-f6aa7ce912ca-kube-api-access-l5fng\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.273513 master-0 kubenswrapper[7744]: I0220 14:47:04.273463 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/84a61910-48eb-4c27-8d69-f6aa7ce912ca-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.274901 master-0 kubenswrapper[7744]: I0220 14:47:04.274849 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/84a61910-48eb-4c27-8d69-f6aa7ce912ca-etc-containers\") pod 
\"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.275000 master-0 kubenswrapper[7744]: I0220 14:47:04.274961 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/84a61910-48eb-4c27-8d69-f6aa7ce912ca-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.275227 master-0 kubenswrapper[7744]: I0220 14:47:04.275168 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/84a61910-48eb-4c27-8d69-f6aa7ce912ca-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.278972 master-0 kubenswrapper[7744]: I0220 14:47:04.278897 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/84a61910-48eb-4c27-8d69-f6aa7ce912ca-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.294785 master-0 kubenswrapper[7744]: I0220 14:47:04.294618 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5fng\" (UniqueName: \"kubernetes.io/projected/84a61910-48eb-4c27-8d69-f6aa7ce912ca-kube-api-access-l5fng\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.366422 master-0 kubenswrapper[7744]: I0220 14:47:04.366343 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:04.491862 master-0 kubenswrapper[7744]: I0220 14:47:04.491827 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:04.558500 master-0 kubenswrapper[7744]: I0220 14:47:04.558333 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr"] Feb 20 14:47:04.568341 master-0 kubenswrapper[7744]: W0220 14:47:04.568024 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc334fff_c0bf_4905_bcdb_b0d2a35b0590.slice/crio-0c48d8481d8bb6541d7d83f4ffc4e7c6003e82f4f8d378fb9a1333d706bc6f14 WatchSource:0}: Error finding container 0c48d8481d8bb6541d7d83f4ffc4e7c6003e82f4f8d378fb9a1333d706bc6f14: Status 404 returned error can't find the container with id 0c48d8481d8bb6541d7d83f4ffc4e7c6003e82f4f8d378fb9a1333d706bc6f14 Feb 20 14:47:04.685347 master-0 kubenswrapper[7744]: I0220 14:47:04.682308 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd"] Feb 20 14:47:05.083465 master-0 kubenswrapper[7744]: W0220 14:47:05.083149 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84a61910_48eb_4c27_8d69_f6aa7ce912ca.slice/crio-34bf21f0d5e74283c2c3382d9b925b925de6b532a3f67ab7bff4afdbe95f9332 WatchSource:0}: Error finding container 34bf21f0d5e74283c2c3382d9b925b925de6b532a3f67ab7bff4afdbe95f9332: Status 404 returned error can't find the 
container with id 34bf21f0d5e74283c2c3382d9b925b925de6b532a3f67ab7bff4afdbe95f9332 Feb 20 14:47:05.260090 master-0 kubenswrapper[7744]: I0220 14:47:05.260039 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" event={"ID":"84a61910-48eb-4c27-8d69-f6aa7ce912ca","Type":"ContainerStarted","Data":"34bf21f0d5e74283c2c3382d9b925b925de6b532a3f67ab7bff4afdbe95f9332"} Feb 20 14:47:05.261566 master-0 kubenswrapper[7744]: I0220 14:47:05.261528 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" event={"ID":"fc334fff-c0bf-4905-bcdb-b0d2a35b0590","Type":"ContainerStarted","Data":"2cc001d9b9602fb584b5d0b096a0d40fac4dbe465b509891b7825972dc39ddc5"} Feb 20 14:47:05.261635 master-0 kubenswrapper[7744]: I0220 14:47:05.261561 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" event={"ID":"fc334fff-c0bf-4905-bcdb-b0d2a35b0590","Type":"ContainerStarted","Data":"0c48d8481d8bb6541d7d83f4ffc4e7c6003e82f4f8d378fb9a1333d706bc6f14"} Feb 20 14:47:05.588600 master-0 kubenswrapper[7744]: I0220 14:47:05.588114 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:47:05.588600 master-0 kubenswrapper[7744]: E0220 14:47:05.588268 7744 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 14:47:05.611604 master-0 kubenswrapper[7744]: E0220 14:47:05.588689 7744 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert podName:9c46d834-1901-4616-b598-7890bfe0bc72 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:21.588649193 +0000 UTC m=+40.790849113 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert") pod "route-controller-manager-74f69747fd-7s6sc" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72") : secret "serving-cert" not found Feb 20 14:47:05.611604 master-0 kubenswrapper[7744]: I0220 14:47:05.588598 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca\") pod \"route-controller-manager-74f69747fd-7s6sc\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:47:05.611604 master-0 kubenswrapper[7744]: E0220 14:47:05.588688 7744 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 20 14:47:05.611604 master-0 kubenswrapper[7744]: E0220 14:47:05.588895 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca podName:9c46d834-1901-4616-b598-7890bfe0bc72 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:21.588887979 +0000 UTC m=+40.791087899 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca") pod "route-controller-manager-74f69747fd-7s6sc" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72") : configmap "client-ca" not found Feb 20 14:47:06.270021 master-0 kubenswrapper[7744]: I0220 14:47:06.269915 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" event={"ID":"84a61910-48eb-4c27-8d69-f6aa7ce912ca","Type":"ContainerStarted","Data":"c611b24ddb76e62693aedc7b9d79cfbcb4b25fe7da745bd0f6bf1d9bed95789d"} Feb 20 14:47:06.270021 master-0 kubenswrapper[7744]: I0220 14:47:06.270011 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" event={"ID":"84a61910-48eb-4c27-8d69-f6aa7ce912ca","Type":"ContainerStarted","Data":"033a3d2eac65c1b4d9f27c950aeb8dc662b4f02d9215e718db95c771bce201e1"} Feb 20 14:47:06.271326 master-0 kubenswrapper[7744]: I0220 14:47:06.271289 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:06.274224 master-0 kubenswrapper[7744]: I0220 14:47:06.274162 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" event={"ID":"fc334fff-c0bf-4905-bcdb-b0d2a35b0590","Type":"ContainerStarted","Data":"c477064b0f3fd6cd0d107cda0e6daa47e69c108cc08e8c15adda744ad3c559d0"} Feb 20 14:47:06.274456 master-0 kubenswrapper[7744]: I0220 14:47:06.274418 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:06.276936 master-0 kubenswrapper[7744]: I0220 14:47:06.276865 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" event={"ID":"af7b6f34-adca-4bdb-9e41-e2995a1d67a8","Type":"ContainerStarted","Data":"e969545d1642c072152a5ec102f1eb7f4892e0030ac35eab40601381088404b4"} Feb 20 14:47:06.277011 master-0 kubenswrapper[7744]: I0220 14:47:06.276980 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" event={"ID":"af7b6f34-adca-4bdb-9e41-e2995a1d67a8","Type":"ContainerStarted","Data":"3ee41ba4abbcbb86e18b3b6f53b30fff65e0915edf9c525908f5d8d1e3b5de7b"} Feb 20 14:47:06.309993 master-0 kubenswrapper[7744]: I0220 14:47:06.309866 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" podStartSLOduration=2.309330159 podStartE2EDuration="2.309330159s" podCreationTimestamp="2026-02-20 14:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:06.301371156 +0000 UTC m=+25.503571086" watchObservedRunningTime="2026-02-20 14:47:06.309330159 +0000 UTC m=+25.511530079" Feb 20 14:47:06.323014 master-0 kubenswrapper[7744]: I0220 14:47:06.322647 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 20 14:47:06.323717 master-0 kubenswrapper[7744]: I0220 14:47:06.323411 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 14:47:06.330085 master-0 kubenswrapper[7744]: I0220 14:47:06.330029 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 20 14:47:06.341097 master-0 kubenswrapper[7744]: I0220 14:47:06.341005 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" podStartSLOduration=2.34097802 podStartE2EDuration="2.34097802s" podCreationTimestamp="2026-02-20 14:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:06.330314461 +0000 UTC m=+25.532514391" watchObservedRunningTime="2026-02-20 14:47:06.34097802 +0000 UTC m=+25.543178010" Feb 20 14:47:06.347144 master-0 kubenswrapper[7744]: I0220 14:47:06.343747 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 20 14:47:06.402745 master-0 kubenswrapper[7744]: I0220 14:47:06.402494 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80490ae2-6185-4c98-ad70-bb13da2fe3b0-kube-api-access\") pod \"installer-1-master-0\" (UID: \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 14:47:06.402745 master-0 kubenswrapper[7744]: I0220 14:47:06.402703 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80490ae2-6185-4c98-ad70-bb13da2fe3b0-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 14:47:06.402986 master-0 kubenswrapper[7744]: I0220 14:47:06.402790 7744 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80490ae2-6185-4c98-ad70-bb13da2fe3b0-var-lock\") pod \"installer-1-master-0\" (UID: \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 14:47:06.418559 master-0 kubenswrapper[7744]: I0220 14:47:06.409514 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" podStartSLOduration=2.791923725 podStartE2EDuration="4.409486609s" podCreationTimestamp="2026-02-20 14:47:02 +0000 UTC" firstStartedPulling="2026-02-20 14:47:03.526645202 +0000 UTC m=+22.728845142" lastFinishedPulling="2026-02-20 14:47:05.144208096 +0000 UTC m=+24.346408026" observedRunningTime="2026-02-20 14:47:06.404687562 +0000 UTC m=+25.606887482" watchObservedRunningTime="2026-02-20 14:47:06.409486609 +0000 UTC m=+25.611686549" Feb 20 14:47:06.503758 master-0 kubenswrapper[7744]: I0220 14:47:06.503719 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80490ae2-6185-4c98-ad70-bb13da2fe3b0-kube-api-access\") pod \"installer-1-master-0\" (UID: \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 14:47:06.504039 master-0 kubenswrapper[7744]: I0220 14:47:06.504025 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80490ae2-6185-4c98-ad70-bb13da2fe3b0-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 14:47:06.504105 master-0 kubenswrapper[7744]: I0220 14:47:06.504074 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/80490ae2-6185-4c98-ad70-bb13da2fe3b0-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 14:47:06.504237 master-0 kubenswrapper[7744]: I0220 14:47:06.504223 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80490ae2-6185-4c98-ad70-bb13da2fe3b0-var-lock\") pod \"installer-1-master-0\" (UID: \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 14:47:06.504380 master-0 kubenswrapper[7744]: I0220 14:47:06.504367 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80490ae2-6185-4c98-ad70-bb13da2fe3b0-var-lock\") pod \"installer-1-master-0\" (UID: \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 14:47:06.522096 master-0 kubenswrapper[7744]: I0220 14:47:06.521999 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80490ae2-6185-4c98-ad70-bb13da2fe3b0-kube-api-access\") pod \"installer-1-master-0\" (UID: \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 14:47:06.728738 master-0 kubenswrapper[7744]: I0220 14:47:06.728656 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 20 14:47:07.034245 master-0 kubenswrapper[7744]: I0220 14:47:07.033811 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 20 14:47:07.284257 master-0 kubenswrapper[7744]: I0220 14:47:07.284158 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"80490ae2-6185-4c98-ad70-bb13da2fe3b0","Type":"ContainerStarted","Data":"0098096c222a7f7bbec901788c207239fe95e271e299dfb4562ca29e40273cc7"} Feb 20 14:47:08.292976 master-0 kubenswrapper[7744]: I0220 14:47:08.291545 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"80490ae2-6185-4c98-ad70-bb13da2fe3b0","Type":"ContainerStarted","Data":"063d7d38f9bc412babd73283f30cdd4274248e0467ee3c63ec3aa1207486311b"} Feb 20 14:47:08.341912 master-0 kubenswrapper[7744]: I0220 14:47:08.341805 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=2.3417849410000002 podStartE2EDuration="2.341784941s" podCreationTimestamp="2026-02-20 14:47:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:08.316614178 +0000 UTC m=+27.518814098" watchObservedRunningTime="2026-02-20 14:47:08.341784941 +0000 UTC m=+27.543984871" Feb 20 14:47:08.342569 master-0 kubenswrapper[7744]: I0220 14:47:08.342524 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-866d45c75d-zvq4v"] Feb 20 14:47:08.342822 master-0 kubenswrapper[7744]: E0220 14:47:08.342781 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" podUID="c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4" Feb 20 14:47:08.353237 master-0 kubenswrapper[7744]: I0220 14:47:08.353153 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc"] Feb 20 14:47:08.353584 master-0 kubenswrapper[7744]: E0220 14:47:08.353521 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" podUID="9c46d834-1901-4616-b598-7890bfe0bc72" Feb 20 14:47:09.147065 master-0 kubenswrapper[7744]: I0220 14:47:09.146681 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:47:09.147319 master-0 kubenswrapper[7744]: I0220 14:47:09.147211 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:47:09.150050 master-0 kubenswrapper[7744]: I0220 14:47:09.149149 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 
14:47:09.154593 master-0 kubenswrapper[7744]: I0220 14:47:09.154537 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert\") pod \"controller-manager-866d45c75d-zvq4v\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:47:09.296589 master-0 kubenswrapper[7744]: I0220 14:47:09.296504 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:47:09.297393 master-0 kubenswrapper[7744]: I0220 14:47:09.296964 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:47:09.310066 master-0 kubenswrapper[7744]: I0220 14:47:09.309968 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v" Feb 20 14:47:09.319689 master-0 kubenswrapper[7744]: I0220 14:47:09.319596 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc" Feb 20 14:47:09.349556 master-0 kubenswrapper[7744]: I0220 14:47:09.349462 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf4d8\" (UniqueName: \"kubernetes.io/projected/9c46d834-1901-4616-b598-7890bfe0bc72-kube-api-access-kf4d8\") pod \"9c46d834-1901-4616-b598-7890bfe0bc72\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " Feb 20 14:47:09.349556 master-0 kubenswrapper[7744]: I0220 14:47:09.349519 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca\") pod \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " Feb 20 14:47:09.349556 master-0 kubenswrapper[7744]: I0220 14:47:09.349559 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-config\") pod \"9c46d834-1901-4616-b598-7890bfe0bc72\" (UID: \"9c46d834-1901-4616-b598-7890bfe0bc72\") " Feb 20 14:47:09.350021 master-0 kubenswrapper[7744]: I0220 14:47:09.349585 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert\") pod \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " Feb 20 14:47:09.350021 master-0 kubenswrapper[7744]: I0220 14:47:09.349610 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-proxy-ca-bundles\") pod \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " Feb 20 14:47:09.350021 master-0 kubenswrapper[7744]: I0220 14:47:09.349636 
7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d5fl\" (UniqueName: \"kubernetes.io/projected/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-kube-api-access-5d5fl\") pod \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " Feb 20 14:47:09.350021 master-0 kubenswrapper[7744]: I0220 14:47:09.349655 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-config\") pod \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\" (UID: \"c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4\") " Feb 20 14:47:09.350323 master-0 kubenswrapper[7744]: I0220 14:47:09.350258 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca" (OuterVolumeSpecName: "client-ca") pod "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 14:47:09.350719 master-0 kubenswrapper[7744]: I0220 14:47:09.350399 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-config" (OuterVolumeSpecName: "config") pod "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 14:47:09.350891 master-0 kubenswrapper[7744]: I0220 14:47:09.350827 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-config" (OuterVolumeSpecName: "config") pod "9c46d834-1901-4616-b598-7890bfe0bc72" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:47:09.351016 master-0 kubenswrapper[7744]: I0220 14:47:09.350918 7744 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:09.351016 master-0 kubenswrapper[7744]: I0220 14:47:09.350957 7744 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-config\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:09.351290 master-0 kubenswrapper[7744]: I0220 14:47:09.351235 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:47:09.354557 master-0 kubenswrapper[7744]: I0220 14:47:09.354501 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 14:47:09.355173 master-0 kubenswrapper[7744]: I0220 14:47:09.355111 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-kube-api-access-5d5fl" (OuterVolumeSpecName: "kube-api-access-5d5fl") pod "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4" (UID: "c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4"). InnerVolumeSpecName "kube-api-access-5d5fl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:47:09.355583 master-0 kubenswrapper[7744]: I0220 14:47:09.355188 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c46d834-1901-4616-b598-7890bfe0bc72-kube-api-access-kf4d8" (OuterVolumeSpecName: "kube-api-access-kf4d8") pod "9c46d834-1901-4616-b598-7890bfe0bc72" (UID: "9c46d834-1901-4616-b598-7890bfe0bc72"). InnerVolumeSpecName "kube-api-access-kf4d8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:47:09.454764 master-0 kubenswrapper[7744]: I0220 14:47:09.452453 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf4d8\" (UniqueName: \"kubernetes.io/projected/9c46d834-1901-4616-b598-7890bfe0bc72-kube-api-access-kf4d8\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:09.454764 master-0 kubenswrapper[7744]: I0220 14:47:09.452507 7744 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-config\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:09.454764 master-0 kubenswrapper[7744]: I0220 14:47:09.452526 7744 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:09.454764 master-0 kubenswrapper[7744]: I0220 14:47:09.452544 7744 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:09.454764 master-0 kubenswrapper[7744]: I0220 14:47:09.452565 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d5fl\" (UniqueName: \"kubernetes.io/projected/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4-kube-api-access-5d5fl\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:10.300134 master-0 kubenswrapper[7744]: I0220 14:47:10.299907 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc"
Feb 20 14:47:10.301109 master-0 kubenswrapper[7744]: I0220 14:47:10.301060 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-866d45c75d-zvq4v"
Feb 20 14:47:10.571817 master-0 kubenswrapper[7744]: I0220 14:47:10.571650 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 20 14:47:10.576746 master-0 kubenswrapper[7744]: I0220 14:47:10.576524 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 20 14:47:10.579571 master-0 kubenswrapper[7744]: I0220 14:47:10.579502 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Feb 20 14:47:10.766416 master-0 kubenswrapper[7744]: I0220 14:47:10.766361 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53835140-8eed-401c-ac07-f89b554ff616-var-lock\") pod \"installer-1-master-0\" (UID: \"53835140-8eed-401c-ac07-f89b554ff616\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 14:47:10.766635 master-0 kubenswrapper[7744]: I0220 14:47:10.766519 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53835140-8eed-401c-ac07-f89b554ff616-kube-api-access\") pod \"installer-1-master-0\" (UID: \"53835140-8eed-401c-ac07-f89b554ff616\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 14:47:10.766635 master-0 kubenswrapper[7744]: I0220 14:47:10.766599 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53835140-8eed-401c-ac07-f89b554ff616-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"53835140-8eed-401c-ac07-f89b554ff616\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 14:47:10.868056 master-0 kubenswrapper[7744]: I0220 14:47:10.867835 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53835140-8eed-401c-ac07-f89b554ff616-kube-api-access\") pod \"installer-1-master-0\" (UID: \"53835140-8eed-401c-ac07-f89b554ff616\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 14:47:10.868056 master-0 kubenswrapper[7744]: I0220 14:47:10.868030 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53835140-8eed-401c-ac07-f89b554ff616-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"53835140-8eed-401c-ac07-f89b554ff616\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 14:47:10.868288 master-0 kubenswrapper[7744]: I0220 14:47:10.868130 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53835140-8eed-401c-ac07-f89b554ff616-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"53835140-8eed-401c-ac07-f89b554ff616\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 14:47:10.868345 master-0 kubenswrapper[7744]: I0220 14:47:10.868311 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53835140-8eed-401c-ac07-f89b554ff616-var-lock\") pod \"installer-1-master-0\" (UID: \"53835140-8eed-401c-ac07-f89b554ff616\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 14:47:10.868597 master-0 kubenswrapper[7744]: I0220 14:47:10.868520 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53835140-8eed-401c-ac07-f89b554ff616-var-lock\") pod \"installer-1-master-0\" (UID: \"53835140-8eed-401c-ac07-f89b554ff616\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 14:47:10.965990 master-0 kubenswrapper[7744]: I0220 14:47:10.961315 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 20 14:47:11.752934 master-0 kubenswrapper[7744]: I0220 14:47:11.752826 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53835140-8eed-401c-ac07-f89b554ff616-kube-api-access\") pod \"installer-1-master-0\" (UID: \"53835140-8eed-401c-ac07-f89b554ff616\") " pod="openshift-etcd/installer-1-master-0"
Feb 20 14:47:11.802620 master-0 kubenswrapper[7744]: I0220 14:47:11.802494 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 20 14:47:11.851122 master-0 kubenswrapper[7744]: I0220 14:47:11.851018 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-5d5776985d-6b6pk"]
Feb 20 14:47:11.852258 master-0 kubenswrapper[7744]: I0220 14:47:11.852200 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.859373 master-0 kubenswrapper[7744]: I0220 14:47:11.858747 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 20 14:47:11.859373 master-0 kubenswrapper[7744]: I0220 14:47:11.859091 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 20 14:47:11.859373 master-0 kubenswrapper[7744]: I0220 14:47:11.859309 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Feb 20 14:47:11.859628 master-0 kubenswrapper[7744]: I0220 14:47:11.859573 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 20 14:47:11.859863 master-0 kubenswrapper[7744]: I0220 14:47:11.859810 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Feb 20 14:47:11.860106 master-0 kubenswrapper[7744]: I0220 14:47:11.860066 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 20 14:47:11.860369 master-0 kubenswrapper[7744]: I0220 14:47:11.860329 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 20 14:47:11.860723 master-0 kubenswrapper[7744]: I0220 14:47:11.860681 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 20 14:47:11.860962 master-0 kubenswrapper[7744]: I0220 14:47:11.860896 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 20 14:47:11.875545 master-0 kubenswrapper[7744]: I0220 14:47:11.874471 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 20 14:47:11.890679 master-0 kubenswrapper[7744]: I0220 14:47:11.890615 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-image-import-ca\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.890679 master-0 kubenswrapper[7744]: I0220 14:47:11.890666 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-encryption-config\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.891051 master-0 kubenswrapper[7744]: I0220 14:47:11.890764 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c9c892d6-ba43-4490-96d8-dcb5758786f2-node-pullsecrets\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.891051 master-0 kubenswrapper[7744]: I0220 14:47:11.890813 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-trusted-ca-bundle\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.891051 master-0 kubenswrapper[7744]: I0220 14:47:11.890908 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.891266 master-0 kubenswrapper[7744]: I0220 14:47:11.891090 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-config\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.891266 master-0 kubenswrapper[7744]: I0220 14:47:11.891196 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit-dir\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.891402 master-0 kubenswrapper[7744]: I0220 14:47:11.891292 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfnp4\" (UniqueName: \"kubernetes.io/projected/c9c892d6-ba43-4490-96d8-dcb5758786f2-kube-api-access-xfnp4\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.891402 master-0 kubenswrapper[7744]: I0220 14:47:11.891336 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-etcd-serving-ca\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.891402 master-0 kubenswrapper[7744]: I0220 14:47:11.891382 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.891590 master-0 kubenswrapper[7744]: I0220 14:47:11.891489 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-etcd-client\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.992396 master-0 kubenswrapper[7744]: I0220 14:47:11.992342 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-etcd-serving-ca\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.992555 master-0 kubenswrapper[7744]: I0220 14:47:11.992418 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.992683 master-0 kubenswrapper[7744]: I0220 14:47:11.992651 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-etcd-client\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.992886 master-0 kubenswrapper[7744]: E0220 14:47:11.992833 7744 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 20 14:47:11.993062 master-0 kubenswrapper[7744]: E0220 14:47:11.993017 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit podName:c9c892d6-ba43-4490-96d8-dcb5758786f2 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:12.492978036 +0000 UTC m=+31.695177996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit") pod "apiserver-5d5776985d-6b6pk" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2") : configmap "audit-0" not found
Feb 20 14:47:11.993208 master-0 kubenswrapper[7744]: I0220 14:47:11.993175 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-image-import-ca\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.993318 master-0 kubenswrapper[7744]: I0220 14:47:11.993287 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-encryption-config\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.993411 master-0 kubenswrapper[7744]: I0220 14:47:11.993383 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c9c892d6-ba43-4490-96d8-dcb5758786f2-node-pullsecrets\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.993682 master-0 kubenswrapper[7744]: I0220 14:47:11.993477 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-trusted-ca-bundle\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.993970 master-0 kubenswrapper[7744]: I0220 14:47:11.993796 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.993970 master-0 kubenswrapper[7744]: I0220 14:47:11.993810 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c9c892d6-ba43-4490-96d8-dcb5758786f2-node-pullsecrets\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.994281 master-0 kubenswrapper[7744]: E0220 14:47:11.993972 7744 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 20 14:47:11.994281 master-0 kubenswrapper[7744]: E0220 14:47:11.994045 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert podName:c9c892d6-ba43-4490-96d8-dcb5758786f2 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:12.494025422 +0000 UTC m=+31.696225352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert") pod "apiserver-5d5776985d-6b6pk" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2") : secret "serving-cert" not found
Feb 20 14:47:11.994281 master-0 kubenswrapper[7744]: I0220 14:47:11.994071 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-image-import-ca\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.994433 master-0 kubenswrapper[7744]: I0220 14:47:11.994284 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-config\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.994474 master-0 kubenswrapper[7744]: I0220 14:47:11.994416 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-etcd-serving-ca\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.994674 master-0 kubenswrapper[7744]: I0220 14:47:11.994611 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit-dir\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.994764 master-0 kubenswrapper[7744]: I0220 14:47:11.994716 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit-dir\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.994833 master-0 kubenswrapper[7744]: I0220 14:47:11.994743 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfnp4\" (UniqueName: \"kubernetes.io/projected/c9c892d6-ba43-4490-96d8-dcb5758786f2-kube-api-access-xfnp4\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.995687 master-0 kubenswrapper[7744]: I0220 14:47:11.995612 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-config\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.996781 master-0 kubenswrapper[7744]: I0220 14:47:11.996094 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-trusted-ca-bundle\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:11.998063 master-0 kubenswrapper[7744]: I0220 14:47:11.998016 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-etcd-client\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:12.018712 master-0 kubenswrapper[7744]: I0220 14:47:12.018578 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-encryption-config\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:12.280431 master-0 kubenswrapper[7744]: I0220 14:47:12.280277 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-5d5776985d-6b6pk"]
Feb 20 14:47:12.509959 master-0 kubenswrapper[7744]: I0220 14:47:12.509784 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:12.509959 master-0 kubenswrapper[7744]: I0220 14:47:12.509928 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:12.513953 master-0 kubenswrapper[7744]: E0220 14:47:12.510228 7744 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 20 14:47:12.513953 master-0 kubenswrapper[7744]: E0220 14:47:12.510309 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert podName:c9c892d6-ba43-4490-96d8-dcb5758786f2 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.510286178 +0000 UTC m=+32.712486128 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert") pod "apiserver-5d5776985d-6b6pk" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2") : secret "serving-cert" not found
Feb 20 14:47:12.513953 master-0 kubenswrapper[7744]: E0220 14:47:12.510789 7744 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 20 14:47:12.513953 master-0 kubenswrapper[7744]: E0220 14:47:12.510838 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit podName:c9c892d6-ba43-4490-96d8-dcb5758786f2 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.510820551 +0000 UTC m=+32.713020511 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit") pod "apiserver-5d5776985d-6b6pk" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2") : configmap "audit-0" not found
Feb 20 14:47:12.677042 master-0 kubenswrapper[7744]: I0220 14:47:12.676904 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 20 14:47:12.697748 master-0 kubenswrapper[7744]: I0220 14:47:12.695454 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc"]
Feb 20 14:47:12.705980 master-0 kubenswrapper[7744]: I0220 14:47:12.699687 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfnp4\" (UniqueName: \"kubernetes.io/projected/c9c892d6-ba43-4490-96d8-dcb5758786f2-kube-api-access-xfnp4\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:12.705980 master-0 kubenswrapper[7744]: I0220 14:47:12.700869 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"]
Feb 20 14:47:12.705980 master-0 kubenswrapper[7744]: I0220 14:47:12.701458 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:12.705980 master-0 kubenswrapper[7744]: I0220 14:47:12.705974 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74f69747fd-7s6sc"]
Feb 20 14:47:12.706271 master-0 kubenswrapper[7744]: I0220 14:47:12.706124 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 20 14:47:12.706318 master-0 kubenswrapper[7744]: I0220 14:47:12.706279 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 20 14:47:12.709568 master-0 kubenswrapper[7744]: I0220 14:47:12.706463 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 20 14:47:12.709568 master-0 kubenswrapper[7744]: I0220 14:47:12.706604 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 20 14:47:12.709568 master-0 kubenswrapper[7744]: I0220 14:47:12.706741 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 20 14:47:12.721879 master-0 kubenswrapper[7744]: I0220 14:47:12.718636 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"]
Feb 20 14:47:12.737929 master-0 kubenswrapper[7744]: I0220 14:47:12.737879 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-866d45c75d-zvq4v"]
Feb 20 14:47:12.741796 master-0 kubenswrapper[7744]: I0220 14:47:12.741753 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-866d45c75d-zvq4v"]
Feb 20 14:47:12.814961 master-0 kubenswrapper[7744]: I0220 14:47:12.812588 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a51babf8-ee1c-4a83-a537-e40d5dc9b425-client-ca\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:12.814961 master-0 kubenswrapper[7744]: I0220 14:47:12.812628 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/a51babf8-ee1c-4a83-a537-e40d5dc9b425-kube-api-access-zcqxx\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:12.814961 master-0 kubenswrapper[7744]: I0220 14:47:12.812649 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:12.814961 master-0 kubenswrapper[7744]: I0220 14:47:12.812718 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51babf8-ee1c-4a83-a537-e40d5dc9b425-config\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:12.814961 master-0 kubenswrapper[7744]: I0220 14:47:12.812751 7744 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c46d834-1901-4616-b598-7890bfe0bc72-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:12.814961 master-0 kubenswrapper[7744]: I0220 14:47:12.812762 7744 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c46d834-1901-4616-b598-7890bfe0bc72-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:12.913952 master-0 kubenswrapper[7744]: I0220 14:47:12.913875 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51babf8-ee1c-4a83-a537-e40d5dc9b425-config\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:12.914141 master-0 kubenswrapper[7744]: I0220 14:47:12.914061 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a51babf8-ee1c-4a83-a537-e40d5dc9b425-client-ca\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:12.914141 master-0 kubenswrapper[7744]: I0220 14:47:12.914104 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/a51babf8-ee1c-4a83-a537-e40d5dc9b425-kube-api-access-zcqxx\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:12.914226 master-0 kubenswrapper[7744]: I0220 14:47:12.914151 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:12.914545 master-0 kubenswrapper[7744]: E0220 14:47:12.914504 7744 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 14:47:12.914605 master-0 kubenswrapper[7744]: E0220 14:47:12.914591 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert podName:a51babf8-ee1c-4a83-a537-e40d5dc9b425 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:13.414568278 +0000 UTC m=+32.616768238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert") pod "route-controller-manager-869b4b97fd-fvb7m" (UID: "a51babf8-ee1c-4a83-a537-e40d5dc9b425") : secret "serving-cert" not found
Feb 20 14:47:12.915995 master-0 kubenswrapper[7744]: I0220 14:47:12.915237 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51babf8-ee1c-4a83-a537-e40d5dc9b425-config\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:12.915995 master-0 kubenswrapper[7744]: I0220 14:47:12.915316 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a51babf8-ee1c-4a83-a537-e40d5dc9b425-client-ca\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:12.933656 master-0 kubenswrapper[7744]: I0220 14:47:12.933458 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/a51babf8-ee1c-4a83-a537-e40d5dc9b425-kube-api-access-zcqxx\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:13.046771 master-0 kubenswrapper[7744]: I0220 14:47:13.046713 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c46d834-1901-4616-b598-7890bfe0bc72" path="/var/lib/kubelet/pods/9c46d834-1901-4616-b598-7890bfe0bc72/volumes"
Feb 20 14:47:13.047528 master-0 kubenswrapper[7744]: I0220 14:47:13.047495 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4" path="/var/lib/kubelet/pods/c5861ef8-cb9e-4a6c-99e4-cfd78ed8b3a4/volumes"
Feb 20 14:47:13.316703 master-0 kubenswrapper[7744]: I0220 14:47:13.316610 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"53835140-8eed-401c-ac07-f89b554ff616","Type":"ContainerStarted","Data":"ac1ebe21f01db828cbdc3775b7cb4f962d321758483e5f64757855bd43976682"}
Feb 20 14:47:13.316703 master-0 kubenswrapper[7744]: I0220 14:47:13.316676 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"53835140-8eed-401c-ac07-f89b554ff616","Type":"ContainerStarted","Data":"823465cca5c74108f34569b06808ad03bfdc5a9d5fe983b835a9ba1e796ceb31"}
Feb 20 14:47:13.419364 master-0 kubenswrapper[7744]: I0220 14:47:13.419276 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:13.419652 master-0 kubenswrapper[7744]: E0220 14:47:13.419506 7744 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 20 14:47:13.419652 master-0 kubenswrapper[7744]: E0220 14:47:13.419646 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert podName:a51babf8-ee1c-4a83-a537-e40d5dc9b425 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:14.419617301 +0000 UTC m=+33.621817251 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert") pod "route-controller-manager-869b4b97fd-fvb7m" (UID: "a51babf8-ee1c-4a83-a537-e40d5dc9b425") : secret "serving-cert" not found
Feb 20 14:47:13.520129 master-0 kubenswrapper[7744]: I0220 14:47:13.520043 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:13.520386 master-0 kubenswrapper[7744]: E0220 14:47:13.520214 7744 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 20 14:47:13.520386 master-0 kubenswrapper[7744]: E0220 14:47:13.520300 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit podName:c9c892d6-ba43-4490-96d8-dcb5758786f2 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:15.520276764 +0000 UTC m=+34.722476724 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit") pod "apiserver-5d5776985d-6b6pk" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2") : configmap "audit-0" not found Feb 20 14:47:13.520514 master-0 kubenswrapper[7744]: I0220 14:47:13.520403 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk" Feb 20 14:47:13.520654 master-0 kubenswrapper[7744]: E0220 14:47:13.520604 7744 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 20 14:47:13.521186 master-0 kubenswrapper[7744]: E0220 14:47:13.520683 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert podName:c9c892d6-ba43-4490-96d8-dcb5758786f2 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:15.520664693 +0000 UTC m=+34.722864643 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert") pod "apiserver-5d5776985d-6b6pk" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2") : secret "serving-cert" not found Feb 20 14:47:13.823737 master-0 kubenswrapper[7744]: I0220 14:47:13.823669 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:47:13.825194 master-0 kubenswrapper[7744]: I0220 14:47:13.823754 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:47:13.825194 master-0 kubenswrapper[7744]: I0220 14:47:13.823825 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:47:13.825194 master-0 kubenswrapper[7744]: I0220 14:47:13.823893 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " 
pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:47:13.825194 master-0 kubenswrapper[7744]: I0220 14:47:13.823983 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:47:13.825194 master-0 kubenswrapper[7744]: I0220 14:47:13.824024 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:47:13.825194 master-0 kubenswrapper[7744]: I0220 14:47:13.824083 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:47:13.825194 master-0 kubenswrapper[7744]: I0220 14:47:13.824140 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:47:13.832151 master-0 
kubenswrapper[7744]: E0220 14:47:13.832105 7744 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 20 14:47:13.832295 master-0 kubenswrapper[7744]: E0220 14:47:13.832183 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert podName:1fe69517-eec2-4721-933c-fa27cea7ab1f nodeName:}" failed. No retries permitted until 2026-02-20 14:47:45.832164192 +0000 UTC m=+65.034364112 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-2sw9z" (UID: "1fe69517-eec2-4721-933c-fa27cea7ab1f") : secret "package-server-manager-serving-cert" not found Feb 20 14:47:13.832884 master-0 kubenswrapper[7744]: I0220 14:47:13.832845 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:47:13.837708 master-0 kubenswrapper[7744]: I0220 14:47:13.837644 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:47:13.838001 master-0 kubenswrapper[7744]: I0220 14:47:13.837725 7744 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:47:13.838723 master-0 kubenswrapper[7744]: I0220 14:47:13.838667 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:47:13.839034 master-0 kubenswrapper[7744]: I0220 14:47:13.838978 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:47:13.843331 master-0 kubenswrapper[7744]: I0220 14:47:13.843252 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-jf2s9\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:47:13.845477 master-0 kubenswrapper[7744]: I0220 14:47:13.845424 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: 
\"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:47:13.929330 master-0 kubenswrapper[7744]: I0220 14:47:13.929254 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:47:13.929330 master-0 kubenswrapper[7744]: I0220 14:47:13.929301 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" Feb 20 14:47:13.933166 master-0 kubenswrapper[7744]: I0220 14:47:13.933111 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-wl49x\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" Feb 20 14:47:13.933304 master-0 kubenswrapper[7744]: I0220 14:47:13.933282 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:47:14.085553 master-0 kubenswrapper[7744]: I0220 14:47:14.085427 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:47:14.087837 master-0 kubenswrapper[7744]: I0220 14:47:14.087784 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 14:47:14.088914 master-0 kubenswrapper[7744]: I0220 14:47:14.088871 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" Feb 20 14:47:14.089048 master-0 kubenswrapper[7744]: I0220 14:47:14.089012 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 14:47:14.089125 master-0 kubenswrapper[7744]: I0220 14:47:14.089071 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 14:47:14.089351 master-0 kubenswrapper[7744]: I0220 14:47:14.089318 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 14:47:14.095526 master-0 kubenswrapper[7744]: I0220 14:47:14.095491 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 14:47:14.099189 master-0 kubenswrapper[7744]: I0220 14:47:14.099138 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" Feb 20 14:47:14.323443 master-0 kubenswrapper[7744]: I0220 14:47:14.322699 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" event={"ID":"4cede061-d85a-4366-9f1e-90be51f726fc","Type":"ContainerStarted","Data":"855c381c7e9503bada08349e9cd9fb33b869da71527d36e4ab32698f02bf192b"} Feb 20 14:47:14.378981 master-0 kubenswrapper[7744]: I0220 14:47:14.378594 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 14:47:14.411660 master-0 kubenswrapper[7744]: I0220 14:47:14.405794 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=4.405775735 podStartE2EDuration="4.405775735s" podCreationTimestamp="2026-02-20 14:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:13.336803454 +0000 UTC m=+32.539003404" watchObservedRunningTime="2026-02-20 14:47:14.405775735 +0000 UTC m=+33.607975665" Feb 20 14:47:14.437629 master-0 kubenswrapper[7744]: I0220 14:47:14.437574 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-5d5776985d-6b6pk"] Feb 20 14:47:14.437826 master-0 kubenswrapper[7744]: E0220 14:47:14.437801 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-5d5776985d-6b6pk" podUID="c9c892d6-ba43-4490-96d8-dcb5758786f2" Feb 20 14:47:14.438061 master-0 kubenswrapper[7744]: I0220 14:47:14.437953 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m" Feb 20 14:47:14.438190 master-0 kubenswrapper[7744]: E0220 14:47:14.438151 7744 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 20 14:47:14.438244 master-0 kubenswrapper[7744]: E0220 14:47:14.438229 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert podName:a51babf8-ee1c-4a83-a537-e40d5dc9b425 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:16.438212086 +0000 UTC m=+35.640412006 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert") pod "route-controller-manager-869b4b97fd-fvb7m" (UID: "a51babf8-ee1c-4a83-a537-e40d5dc9b425") : secret "serving-cert" not found Feb 20 14:47:14.465260 master-0 kubenswrapper[7744]: I0220 14:47:14.465126 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt"] Feb 20 14:47:14.470884 master-0 kubenswrapper[7744]: I0220 14:47:14.470702 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-97m7r"] Feb 20 14:47:14.490722 master-0 kubenswrapper[7744]: I0220 14:47:14.488041 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-gkrph"] Feb 20 14:47:14.516978 master-0 kubenswrapper[7744]: I0220 14:47:14.513543 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 14:47:14.566455 master-0 
kubenswrapper[7744]: I0220 14:47:14.566367 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"] Feb 20 14:47:14.590780 master-0 kubenswrapper[7744]: W0220 14:47:14.590710 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1fb2774_6dd7_4429_9df3_4ddfcdaac939.slice/crio-57d1f27e3b1777057d880d98efb4a8e0d90629f2aa4281ad872ea1245d8afe4d WatchSource:0}: Error finding container 57d1f27e3b1777057d880d98efb4a8e0d90629f2aa4281ad872ea1245d8afe4d: Status 404 returned error can't find the container with id 57d1f27e3b1777057d880d98efb4a8e0d90629f2aa4281ad872ea1245d8afe4d Feb 20 14:47:14.715496 master-0 kubenswrapper[7744]: I0220 14:47:14.715426 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"] Feb 20 14:47:14.718217 master-0 kubenswrapper[7744]: I0220 14:47:14.718161 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-99lkv"] Feb 20 14:47:14.727258 master-0 kubenswrapper[7744]: I0220 14:47:14.727212 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"] Feb 20 14:47:14.727578 master-0 kubenswrapper[7744]: W0220 14:47:14.727537 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31d71c90_cab7_4411_9426_0713cb026294.slice/crio-81b14b205a5b43d7cf78b359f564d3ae3e67aaf00f87262df973d130ce6f30c0 WatchSource:0}: Error finding container 81b14b205a5b43d7cf78b359f564d3ae3e67aaf00f87262df973d130ce6f30c0: Status 404 returned error can't find the container with id 81b14b205a5b43d7cf78b359f564d3ae3e67aaf00f87262df973d130ce6f30c0 Feb 20 14:47:14.729060 master-0 kubenswrapper[7744]: W0220 14:47:14.728974 7744 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ea4c132_b6d0_4dc9_942d_48e359eed418.slice/crio-7d7dfb1a01a9470453018e9e4e99ad966573e066e4eb9b370f42ef7d7426a75e WatchSource:0}: Error finding container 7d7dfb1a01a9470453018e9e4e99ad966573e066e4eb9b370f42ef7d7426a75e: Status 404 returned error can't find the container with id 7d7dfb1a01a9470453018e9e4e99ad966573e066e4eb9b370f42ef7d7426a75e Feb 20 14:47:14.736308 master-0 kubenswrapper[7744]: W0220 14:47:14.736259 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod419f28a9_8fd7_4b59_9554_4d884a1208b5.slice/crio-84da6dcc282a18c48a027b33cd2404e3592b75c697de5dd4ab39e2cebf5cff28 WatchSource:0}: Error finding container 84da6dcc282a18c48a027b33cd2404e3592b75c697de5dd4ab39e2cebf5cff28: Status 404 returned error can't find the container with id 84da6dcc282a18c48a027b33cd2404e3592b75c697de5dd4ab39e2cebf5cff28 Feb 20 14:47:15.322680 master-0 kubenswrapper[7744]: I0220 14:47:15.317378 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6947c8468f-9tjhp"] Feb 20 14:47:15.322680 master-0 kubenswrapper[7744]: I0220 14:47:15.317914 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" Feb 20 14:47:15.322680 master-0 kubenswrapper[7744]: I0220 14:47:15.320300 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 20 14:47:15.322680 master-0 kubenswrapper[7744]: I0220 14:47:15.320520 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 20 14:47:15.322680 master-0 kubenswrapper[7744]: I0220 14:47:15.321075 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 20 14:47:15.322680 master-0 kubenswrapper[7744]: I0220 14:47:15.321253 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 20 14:47:15.322680 master-0 kubenswrapper[7744]: I0220 14:47:15.321600 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 20 14:47:15.331553 master-0 kubenswrapper[7744]: I0220 14:47:15.329725 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6947c8468f-9tjhp"] Feb 20 14:47:15.333755 master-0 kubenswrapper[7744]: I0220 14:47:15.333723 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 20 14:47:15.334703 master-0 kubenswrapper[7744]: I0220 14:47:15.334642 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" event={"ID":"31d71c90-cab7-4411-9426-0713cb026294","Type":"ContainerStarted","Data":"81b14b205a5b43d7cf78b359f564d3ae3e67aaf00f87262df973d130ce6f30c0"} Feb 20 14:47:15.337390 master-0 kubenswrapper[7744]: I0220 14:47:15.336810 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-99lkv" event={"ID":"5ea4c132-b6d0-4dc9-942d-48e359eed418","Type":"ContainerStarted","Data":"7d7dfb1a01a9470453018e9e4e99ad966573e066e4eb9b370f42ef7d7426a75e"} Feb 20 14:47:15.338963 master-0 kubenswrapper[7744]: I0220 14:47:15.338428 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" event={"ID":"419f28a9-8fd7-4b59-9554-4d884a1208b5","Type":"ContainerStarted","Data":"84da6dcc282a18c48a027b33cd2404e3592b75c697de5dd4ab39e2cebf5cff28"} Feb 20 14:47:15.341659 master-0 kubenswrapper[7744]: I0220 14:47:15.339365 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" event={"ID":"a1fb2774-6dd7-4429-9df3-4ddfcdaac939","Type":"ContainerStarted","Data":"57d1f27e3b1777057d880d98efb4a8e0d90629f2aa4281ad872ea1245d8afe4d"} Feb 20 14:47:15.341659 master-0 kubenswrapper[7744]: I0220 14:47:15.341549 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" event={"ID":"c0a3548f-299c-4234-9bf1-c93efcb9740b","Type":"ContainerStarted","Data":"3e54884bb129553f96e22ded74db5788d449f044a28bbdd487ce407f3c14ba01"} Feb 20 14:47:15.343812 master-0 kubenswrapper[7744]: I0220 14:47:15.343789 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" event={"ID":"d28490b0-96ca-4fe0-8fae-e6f8390f933b","Type":"ContainerStarted","Data":"80b53aa57494cc0bc6bbacad6b2e04131adc3c0ab6e7a77f83dd0c6c91461d7d"} Feb 20 14:47:15.346254 master-0 kubenswrapper[7744]: I0220 14:47:15.346223 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2027dfb0-2633-4e59-bcad-24ec1658029d-serving-cert\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " 
pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" Feb 20 14:47:15.346327 master-0 kubenswrapper[7744]: I0220 14:47:15.346255 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szmpv\" (UniqueName: \"kubernetes.io/projected/2027dfb0-2633-4e59-bcad-24ec1658029d-kube-api-access-szmpv\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" Feb 20 14:47:15.346327 master-0 kubenswrapper[7744]: I0220 14:47:15.346296 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-proxy-ca-bundles\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" Feb 20 14:47:15.346393 master-0 kubenswrapper[7744]: I0220 14:47:15.346330 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-config\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" Feb 20 14:47:15.346393 master-0 kubenswrapper[7744]: I0220 14:47:15.346350 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-client-ca\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" Feb 20 14:47:15.346938 master-0 kubenswrapper[7744]: I0220 14:47:15.346889 7744 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" event={"ID":"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1","Type":"ContainerStarted","Data":"5c30b9cdcf13e6a3816e39ff92455fc96f090fac8eb9899e480122d604e7a1b8"}
Feb 20 14:47:15.347005 master-0 kubenswrapper[7744]: I0220 14:47:15.346928 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:15.352453 master-0 kubenswrapper[7744]: I0220 14:47:15.352412 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:15.447053 master-0 kubenswrapper[7744]: I0220 14:47:15.447009 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c9c892d6-ba43-4490-96d8-dcb5758786f2-node-pullsecrets\") pod \"c9c892d6-ba43-4490-96d8-dcb5758786f2\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") "
Feb 20 14:47:15.447053 master-0 kubenswrapper[7744]: I0220 14:47:15.447059 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-trusted-ca-bundle\") pod \"c9c892d6-ba43-4490-96d8-dcb5758786f2\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") "
Feb 20 14:47:15.447272 master-0 kubenswrapper[7744]: I0220 14:47:15.447107 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfnp4\" (UniqueName: \"kubernetes.io/projected/c9c892d6-ba43-4490-96d8-dcb5758786f2-kube-api-access-xfnp4\") pod \"c9c892d6-ba43-4490-96d8-dcb5758786f2\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") "
Feb 20 14:47:15.447272 master-0 kubenswrapper[7744]: I0220 14:47:15.447129 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-etcd-client\") pod \"c9c892d6-ba43-4490-96d8-dcb5758786f2\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") "
Feb 20 14:47:15.447272 master-0 kubenswrapper[7744]: I0220 14:47:15.447170 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-encryption-config\") pod \"c9c892d6-ba43-4490-96d8-dcb5758786f2\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") "
Feb 20 14:47:15.447272 master-0 kubenswrapper[7744]: I0220 14:47:15.447190 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-etcd-serving-ca\") pod \"c9c892d6-ba43-4490-96d8-dcb5758786f2\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") "
Feb 20 14:47:15.447272 master-0 kubenswrapper[7744]: I0220 14:47:15.447214 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-config\") pod \"c9c892d6-ba43-4490-96d8-dcb5758786f2\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") "
Feb 20 14:47:15.447272 master-0 kubenswrapper[7744]: I0220 14:47:15.447241 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-image-import-ca\") pod \"c9c892d6-ba43-4490-96d8-dcb5758786f2\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") "
Feb 20 14:47:15.447272 master-0 kubenswrapper[7744]: I0220 14:47:15.447270 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit-dir\") pod \"c9c892d6-ba43-4490-96d8-dcb5758786f2\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") "
Feb 20 14:47:15.447468 master-0 kubenswrapper[7744]: I0220 14:47:15.447365 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-proxy-ca-bundles\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:15.447468 master-0 kubenswrapper[7744]: I0220 14:47:15.447411 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-config\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:15.447468 master-0 kubenswrapper[7744]: I0220 14:47:15.447431 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-client-ca\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:15.447551 master-0 kubenswrapper[7744]: I0220 14:47:15.447469 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2027dfb0-2633-4e59-bcad-24ec1658029d-serving-cert\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:15.447551 master-0 kubenswrapper[7744]: I0220 14:47:15.447491 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szmpv\" (UniqueName: \"kubernetes.io/projected/2027dfb0-2633-4e59-bcad-24ec1658029d-kube-api-access-szmpv\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:15.448506 master-0 kubenswrapper[7744]: I0220 14:47:15.447204 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9c892d6-ba43-4490-96d8-dcb5758786f2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c9c892d6-ba43-4490-96d8-dcb5758786f2" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:47:15.448556 master-0 kubenswrapper[7744]: I0220 14:47:15.448217 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c9c892d6-ba43-4490-96d8-dcb5758786f2" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:47:15.448556 master-0 kubenswrapper[7744]: I0220 14:47:15.448435 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "c9c892d6-ba43-4490-96d8-dcb5758786f2" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:47:15.448556 master-0 kubenswrapper[7744]: I0220 14:47:15.448462 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "c9c892d6-ba43-4490-96d8-dcb5758786f2" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:47:15.449166 master-0 kubenswrapper[7744]: I0220 14:47:15.449131 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c9c892d6-ba43-4490-96d8-dcb5758786f2" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:47:15.449875 master-0 kubenswrapper[7744]: I0220 14:47:15.449842 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-config" (OuterVolumeSpecName: "config") pod "c9c892d6-ba43-4490-96d8-dcb5758786f2" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:47:15.449983 master-0 kubenswrapper[7744]: I0220 14:47:15.449951 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-config\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:15.450515 master-0 kubenswrapper[7744]: I0220 14:47:15.450477 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-client-ca\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:15.450582 master-0 kubenswrapper[7744]: I0220 14:47:15.450523 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-proxy-ca-bundles\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:15.453382 master-0 kubenswrapper[7744]: I0220 14:47:15.453084 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2027dfb0-2633-4e59-bcad-24ec1658029d-serving-cert\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:15.453453 master-0 kubenswrapper[7744]: I0220 14:47:15.453252 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "c9c892d6-ba43-4490-96d8-dcb5758786f2" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 14:47:15.454273 master-0 kubenswrapper[7744]: I0220 14:47:15.454229 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "c9c892d6-ba43-4490-96d8-dcb5758786f2" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 14:47:15.455313 master-0 kubenswrapper[7744]: I0220 14:47:15.455237 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c892d6-ba43-4490-96d8-dcb5758786f2-kube-api-access-xfnp4" (OuterVolumeSpecName: "kube-api-access-xfnp4") pod "c9c892d6-ba43-4490-96d8-dcb5758786f2" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2"). InnerVolumeSpecName "kube-api-access-xfnp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:47:15.464169 master-0 kubenswrapper[7744]: I0220 14:47:15.464146 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szmpv\" (UniqueName: \"kubernetes.io/projected/2027dfb0-2633-4e59-bcad-24ec1658029d-kube-api-access-szmpv\") pod \"controller-manager-6947c8468f-9tjhp\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") " pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:15.549212 master-0 kubenswrapper[7744]: I0220 14:47:15.549163 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:15.549418 master-0 kubenswrapper[7744]: I0220 14:47:15.549254 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert\") pod \"apiserver-5d5776985d-6b6pk\" (UID: \"c9c892d6-ba43-4490-96d8-dcb5758786f2\") " pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:15.549418 master-0 kubenswrapper[7744]: I0220 14:47:15.549299 7744 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:15.549418 master-0 kubenswrapper[7744]: I0220 14:47:15.549313 7744 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c9c892d6-ba43-4490-96d8-dcb5758786f2-node-pullsecrets\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:15.549418 master-0 kubenswrapper[7744]: I0220 14:47:15.549322 7744 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:15.549418 master-0 kubenswrapper[7744]: I0220 14:47:15.549330 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfnp4\" (UniqueName: \"kubernetes.io/projected/c9c892d6-ba43-4490-96d8-dcb5758786f2-kube-api-access-xfnp4\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:15.549418 master-0 kubenswrapper[7744]: I0220 14:47:15.549339 7744 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-etcd-client\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:15.549418 master-0 kubenswrapper[7744]: E0220 14:47:15.549336 7744 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Feb 20 14:47:15.549418 master-0 kubenswrapper[7744]: E0220 14:47:15.549415 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit podName:c9c892d6-ba43-4490-96d8-dcb5758786f2 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:19.549394924 +0000 UTC m=+38.751594844 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit") pod "apiserver-5d5776985d-6b6pk" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2") : configmap "audit-0" not found
Feb 20 14:47:15.549663 master-0 kubenswrapper[7744]: E0220 14:47:15.549431 7744 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Feb 20 14:47:15.549663 master-0 kubenswrapper[7744]: I0220 14:47:15.549347 7744 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-encryption-config\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:15.549663 master-0 kubenswrapper[7744]: I0220 14:47:15.549502 7744 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-etcd-serving-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:15.549663 master-0 kubenswrapper[7744]: I0220 14:47:15.549514 7744 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-config\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:15.549663 master-0 kubenswrapper[7744]: I0220 14:47:15.549525 7744 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-image-import-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:15.549663 master-0 kubenswrapper[7744]: E0220 14:47:15.549573 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert podName:c9c892d6-ba43-4490-96d8-dcb5758786f2 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:19.549558478 +0000 UTC m=+38.751758398 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert") pod "apiserver-5d5776985d-6b6pk" (UID: "c9c892d6-ba43-4490-96d8-dcb5758786f2") : secret "serving-cert" not found
Feb 20 14:47:15.646797 master-0 kubenswrapper[7744]: I0220 14:47:15.646686 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:16.353865 master-0 kubenswrapper[7744]: I0220 14:47:16.353782 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5d5776985d-6b6pk"
Feb 20 14:47:16.476040 master-0 kubenswrapper[7744]: I0220 14:47:16.475978 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:16.481420 master-0 kubenswrapper[7744]: I0220 14:47:16.481375 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert\") pod \"route-controller-manager-869b4b97fd-fvb7m\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") " pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:16.864850 master-0 kubenswrapper[7744]: I0220 14:47:16.864805 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:16.867847 master-0 kubenswrapper[7744]: I0220 14:47:16.867783 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 14:47:16.931170 master-0 kubenswrapper[7744]: I0220 14:47:16.930417 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 20 14:47:16.931170 master-0 kubenswrapper[7744]: I0220 14:47:16.930601 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="80490ae2-6185-4c98-ad70-bb13da2fe3b0" containerName="installer" containerID="cri-o://063d7d38f9bc412babd73283f30cdd4274248e0467ee3c63ec3aa1207486311b" gracePeriod=30
Feb 20 14:47:16.965478 master-0 kubenswrapper[7744]: I0220 14:47:16.964763 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-776c8f54bc-gmvx8"]
Feb 20 14:47:16.965478 master-0 kubenswrapper[7744]: I0220 14:47:16.965432 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-5d5776985d-6b6pk"]
Feb 20 14:47:16.965696 master-0 kubenswrapper[7744]: I0220 14:47:16.965521 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:16.975983 master-0 kubenswrapper[7744]: I0220 14:47:16.973816 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 20 14:47:16.975983 master-0 kubenswrapper[7744]: I0220 14:47:16.973851 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 20 14:47:16.975983 master-0 kubenswrapper[7744]: I0220 14:47:16.974135 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 20 14:47:16.975983 master-0 kubenswrapper[7744]: I0220 14:47:16.974215 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 20 14:47:16.976521 master-0 kubenswrapper[7744]: I0220 14:47:16.976482 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-5d5776985d-6b6pk"]
Feb 20 14:47:16.978599 master-0 kubenswrapper[7744]: I0220 14:47:16.978465 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 20 14:47:16.978599 master-0 kubenswrapper[7744]: I0220 14:47:16.978491 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 20 14:47:16.978599 master-0 kubenswrapper[7744]: I0220 14:47:16.978524 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 20 14:47:16.978867 master-0 kubenswrapper[7744]: I0220 14:47:16.978744 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 20 14:47:16.978867 master-0 kubenswrapper[7744]: I0220 14:47:16.978769 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 20 14:47:16.988780 master-0 kubenswrapper[7744]: I0220 14:47:16.986889 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 20 14:47:16.988780 master-0 kubenswrapper[7744]: I0220 14:47:16.988360 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-776c8f54bc-gmvx8"]
Feb 20 14:47:17.043401 master-0 kubenswrapper[7744]: I0220 14:47:17.043357 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9c892d6-ba43-4490-96d8-dcb5758786f2" path="/var/lib/kubelet/pods/c9c892d6-ba43-4490-96d8-dcb5758786f2/volumes"
Feb 20 14:47:17.070896 master-0 kubenswrapper[7744]: I0220 14:47:17.070858 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-serving-cert\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.070896 master-0 kubenswrapper[7744]: I0220 14:47:17.070901 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-audit-dir\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.071119 master-0 kubenswrapper[7744]: I0220 14:47:17.071034 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-etcd-serving-ca\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.071119 master-0 kubenswrapper[7744]: I0220 14:47:17.071091 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-encryption-config\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.071189 master-0 kubenswrapper[7744]: I0220 14:47:17.071127 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-trusted-ca-bundle\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.071189 master-0 kubenswrapper[7744]: I0220 14:47:17.071148 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-config\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.071338 master-0 kubenswrapper[7744]: I0220 14:47:17.071289 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-image-import-ca\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.071481 master-0 kubenswrapper[7744]: I0220 14:47:17.071423 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-audit\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.071554 master-0 kubenswrapper[7744]: I0220 14:47:17.071533 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-etcd-client\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.071623 master-0 kubenswrapper[7744]: I0220 14:47:17.071599 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-node-pullsecrets\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.071671 master-0 kubenswrapper[7744]: I0220 14:47:17.071645 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b54xg\" (UniqueName: \"kubernetes.io/projected/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-kube-api-access-b54xg\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.071806 master-0 kubenswrapper[7744]: I0220 14:47:17.071778 7744 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c9c892d6-ba43-4490-96d8-dcb5758786f2-audit\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:17.071843 master-0 kubenswrapper[7744]: I0220 14:47:17.071825 7744 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c892d6-ba43-4490-96d8-dcb5758786f2-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:17.173472 master-0 kubenswrapper[7744]: I0220 14:47:17.173331 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-audit\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.174125 master-0 kubenswrapper[7744]: I0220 14:47:17.174084 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-audit\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.174208 master-0 kubenswrapper[7744]: I0220 14:47:17.174132 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-etcd-client\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.174208 master-0 kubenswrapper[7744]: I0220 14:47:17.174165 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-node-pullsecrets\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.174208 master-0 kubenswrapper[7744]: I0220 14:47:17.174184 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b54xg\" (UniqueName: \"kubernetes.io/projected/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-kube-api-access-b54xg\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.174208 master-0 kubenswrapper[7744]: I0220 14:47:17.174210 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-serving-cert\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.174398 master-0 kubenswrapper[7744]: I0220 14:47:17.174233 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-audit-dir\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.174398 master-0 kubenswrapper[7744]: I0220 14:47:17.174258 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-etcd-serving-ca\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.174398 master-0 kubenswrapper[7744]: I0220 14:47:17.174278 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-encryption-config\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.174398 master-0 kubenswrapper[7744]: I0220 14:47:17.174296 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-trusted-ca-bundle\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.174398 master-0 kubenswrapper[7744]: I0220 14:47:17.174312 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-config\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.174398 master-0 kubenswrapper[7744]: I0220 14:47:17.174330 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-image-import-ca\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.174717 master-0 kubenswrapper[7744]: I0220 14:47:17.174653 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-image-import-ca\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.179569 master-0 kubenswrapper[7744]: I0220 14:47:17.179306 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-etcd-client\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.181034 master-0 kubenswrapper[7744]: I0220 14:47:17.180999 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-trusted-ca-bundle\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.181716 master-0 kubenswrapper[7744]: I0220 14:47:17.181678 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-etcd-serving-ca\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.182241 master-0 kubenswrapper[7744]: I0220 14:47:17.182214 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-config\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.183592 master-0 kubenswrapper[7744]: I0220 14:47:17.183559 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-serving-cert\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.183671 master-0 kubenswrapper[7744]: I0220 14:47:17.174996 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-node-pullsecrets\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.183671 master-0 kubenswrapper[7744]: I0220 14:47:17.175103 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-audit-dir\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.194707 master-0 kubenswrapper[7744]: I0220 14:47:17.194667 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-encryption-config\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.196549 master-0 kubenswrapper[7744]: I0220 14:47:17.196523 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b54xg\" (UniqueName: \"kubernetes.io/projected/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-kube-api-access-b54xg\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:17.293562 master-0 kubenswrapper[7744]: I0220 14:47:17.293475 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:19.326273 master-0 kubenswrapper[7744]: I0220 14:47:19.324461 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 20 14:47:19.326273 master-0 kubenswrapper[7744]: I0220 14:47:19.325084 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 20 14:47:19.330957 master-0 kubenswrapper[7744]: I0220 14:47:19.330892 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 20 14:47:19.399760 master-0 kubenswrapper[7744]: I0220 14:47:19.399714 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-var-lock\") pod \"installer-2-master-0\" (UID: \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 20 14:47:19.399853 master-0 kubenswrapper[7744]: I0220 14:47:19.399766 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 20 14:47:19.399853 master-0 kubenswrapper[7744]: I0220 14:47:19.399848 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 20 14:47:19.501277 master-0 kubenswrapper[7744]: I0220 14:47:19.501214 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 20 14:47:19.501484 master-0 kubenswrapper[7744]: I0220 14:47:19.501328 7744 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-var-lock\") pod \"installer-2-master-0\" (UID: \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 14:47:19.501484 master-0 kubenswrapper[7744]: I0220 14:47:19.501397 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 14:47:19.501484 master-0 kubenswrapper[7744]: I0220 14:47:19.501445 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-var-lock\") pod \"installer-2-master-0\" (UID: \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 14:47:19.501484 master-0 kubenswrapper[7744]: I0220 14:47:19.501463 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 14:47:19.526076 master-0 kubenswrapper[7744]: I0220 14:47:19.525715 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 14:47:19.710144 master-0 kubenswrapper[7744]: I0220 14:47:19.710096 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 20 14:47:22.575483 master-0 kubenswrapper[7744]: I0220 14:47:22.574301 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-776c8f54bc-gmvx8"] Feb 20 14:47:22.745149 master-0 kubenswrapper[7744]: I0220 14:47:22.738090 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 20 14:47:22.745149 master-0 kubenswrapper[7744]: I0220 14:47:22.739089 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6947c8468f-9tjhp"] Feb 20 14:47:22.759643 master-0 kubenswrapper[7744]: W0220 14:47:22.759521 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2027dfb0_2633_4e59_bcad_24ec1658029d.slice/crio-4adc46d882aecc7f951d7c476f4435231300360db0e7948597d13b0d353cdb17 WatchSource:0}: Error finding container 4adc46d882aecc7f951d7c476f4435231300360db0e7948597d13b0d353cdb17: Status 404 returned error can't find the container with id 4adc46d882aecc7f951d7c476f4435231300360db0e7948597d13b0d353cdb17 Feb 20 14:47:22.774110 master-0 kubenswrapper[7744]: W0220 14:47:22.769281 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7f939e3a_7a2a_4888_8c2b_0ec6944c0369.slice/crio-0be6bc01f07eb32ca821be8ad8bfcf22d73cf30ee1c73629be37b1774d26349f WatchSource:0}: Error finding container 0be6bc01f07eb32ca821be8ad8bfcf22d73cf30ee1c73629be37b1774d26349f: Status 404 returned error can't find the container with id 0be6bc01f07eb32ca821be8ad8bfcf22d73cf30ee1c73629be37b1774d26349f Feb 20 14:47:22.785004 master-0 kubenswrapper[7744]: I0220 14:47:22.784952 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"] Feb 20 14:47:22.819220 master-0 kubenswrapper[7744]: W0220 
14:47:22.818953 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda51babf8_ee1c_4a83_a537_e40d5dc9b425.slice/crio-01d0df3bbdb5eda3970456ee9bc8ebaf7bd105b3c70d34b98d4e9e5b9678b3a1 WatchSource:0}: Error finding container 01d0df3bbdb5eda3970456ee9bc8ebaf7bd105b3c70d34b98d4e9e5b9678b3a1: Status 404 returned error can't find the container with id 01d0df3bbdb5eda3970456ee9bc8ebaf7bd105b3c70d34b98d4e9e5b9678b3a1 Feb 20 14:47:22.820135 master-0 kubenswrapper[7744]: I0220 14:47:22.820104 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-jc4wl"] Feb 20 14:47:22.820594 master-0 kubenswrapper[7744]: I0220 14:47:22.820578 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871140 master-0 kubenswrapper[7744]: I0220 14:47:22.871098 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-kubernetes\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871140 master-0 kubenswrapper[7744]: I0220 14:47:22.871140 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/19ce4b45-db46-4fc3-8d72-963de22f026b-tmp\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871284 master-0 kubenswrapper[7744]: I0220 14:47:22.871158 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-var-lib-kubelet\") pod 
\"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871284 master-0 kubenswrapper[7744]: I0220 14:47:22.871194 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysctl-d\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871284 master-0 kubenswrapper[7744]: I0220 14:47:22.871210 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysctl-conf\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871284 master-0 kubenswrapper[7744]: I0220 14:47:22.871235 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysconfig\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871284 master-0 kubenswrapper[7744]: I0220 14:47:22.871250 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-lib-modules\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871284 master-0 kubenswrapper[7744]: I0220 14:47:22.871265 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: 
\"kubernetes.io/empty-dir/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-tuned\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871284 master-0 kubenswrapper[7744]: I0220 14:47:22.871280 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45226\" (UniqueName: \"kubernetes.io/projected/19ce4b45-db46-4fc3-8d72-963de22f026b-kube-api-access-45226\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871476 master-0 kubenswrapper[7744]: I0220 14:47:22.871300 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-host\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871476 master-0 kubenswrapper[7744]: I0220 14:47:22.871315 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-sys\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871476 master-0 kubenswrapper[7744]: I0220 14:47:22.871358 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-systemd\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871476 master-0 kubenswrapper[7744]: I0220 14:47:22.871380 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-modprobe-d\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.871476 master-0 kubenswrapper[7744]: I0220 14:47:22.871399 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-run\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.975505 master-0 kubenswrapper[7744]: I0220 14:47:22.975457 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/19ce4b45-db46-4fc3-8d72-963de22f026b-tmp\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.975505 master-0 kubenswrapper[7744]: I0220 14:47:22.975500 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-var-lib-kubelet\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.975660 master-0 kubenswrapper[7744]: I0220 14:47:22.975569 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-var-lib-kubelet\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.975660 master-0 kubenswrapper[7744]: I0220 14:47:22.975618 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: 
\"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysctl-d\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.975660 master-0 kubenswrapper[7744]: I0220 14:47:22.975639 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysctl-conf\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.975775 master-0 kubenswrapper[7744]: I0220 14:47:22.975663 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysconfig\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.975775 master-0 kubenswrapper[7744]: I0220 14:47:22.975682 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-lib-modules\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.975775 master-0 kubenswrapper[7744]: I0220 14:47:22.975725 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysctl-d\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.977748 master-0 kubenswrapper[7744]: I0220 14:47:22.975827 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: 
\"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysconfig\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.977748 master-0 kubenswrapper[7744]: I0220 14:47:22.975882 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-lib-modules\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.977748 master-0 kubenswrapper[7744]: I0220 14:47:22.975895 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-tuned\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.977748 master-0 kubenswrapper[7744]: I0220 14:47:22.977526 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45226\" (UniqueName: \"kubernetes.io/projected/19ce4b45-db46-4fc3-8d72-963de22f026b-kube-api-access-45226\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.977748 master-0 kubenswrapper[7744]: I0220 14:47:22.977578 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-sys\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.977748 master-0 kubenswrapper[7744]: I0220 14:47:22.977594 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-host\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.977748 master-0 kubenswrapper[7744]: I0220 14:47:22.977622 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-systemd\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.977748 master-0 kubenswrapper[7744]: I0220 14:47:22.977643 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-modprobe-d\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.978012 master-0 kubenswrapper[7744]: I0220 14:47:22.976550 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysctl-conf\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.978012 master-0 kubenswrapper[7744]: I0220 14:47:22.977935 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-sys\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.978012 master-0 kubenswrapper[7744]: I0220 14:47:22.977946 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-host\") pod 
\"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.978012 master-0 kubenswrapper[7744]: I0220 14:47:22.977976 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-systemd\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.978123 master-0 kubenswrapper[7744]: I0220 14:47:22.978040 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-run\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.978123 master-0 kubenswrapper[7744]: I0220 14:47:22.978058 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-modprobe-d\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.978123 master-0 kubenswrapper[7744]: I0220 14:47:22.978099 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-kubernetes\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.978236 master-0 kubenswrapper[7744]: I0220 14:47:22.978214 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-run\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " 
pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.978281 master-0 kubenswrapper[7744]: I0220 14:47:22.978257 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-kubernetes\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.981943 master-0 kubenswrapper[7744]: I0220 14:47:22.981887 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-tuned\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.991197 master-0 kubenswrapper[7744]: I0220 14:47:22.991032 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/19ce4b45-db46-4fc3-8d72-963de22f026b-tmp\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:22.997228 master-0 kubenswrapper[7744]: I0220 14:47:22.996992 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45226\" (UniqueName: \"kubernetes.io/projected/19ce4b45-db46-4fc3-8d72-963de22f026b-kube-api-access-45226\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:23.139104 master-0 kubenswrapper[7744]: I0220 14:47:23.139026 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 14:47:23.324336 master-0 kubenswrapper[7744]: I0220 14:47:23.321379 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-dzfl8"] Feb 20 14:47:23.324336 master-0 kubenswrapper[7744]: I0220 14:47:23.322177 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dzfl8" Feb 20 14:47:23.327810 master-0 kubenswrapper[7744]: I0220 14:47:23.327727 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 20 14:47:23.328472 master-0 kubenswrapper[7744]: I0220 14:47:23.328449 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 20 14:47:23.329081 master-0 kubenswrapper[7744]: I0220 14:47:23.328673 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 20 14:47:23.329081 master-0 kubenswrapper[7744]: I0220 14:47:23.328891 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 20 14:47:23.337951 master-0 kubenswrapper[7744]: I0220 14:47:23.336287 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dzfl8"] Feb 20 14:47:23.381522 master-0 kubenswrapper[7744]: I0220 14:47:23.381464 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-config-volume\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 14:47:23.381702 master-0 kubenswrapper[7744]: I0220 14:47:23.381530 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthkk\" (UniqueName: 
\"kubernetes.io/projected/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-kube-api-access-tthkk\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 14:47:23.381702 master-0 kubenswrapper[7744]: I0220 14:47:23.381587 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-metrics-tls\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 14:47:23.398901 master-0 kubenswrapper[7744]: I0220 14:47:23.398860 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" event={"ID":"2027dfb0-2633-4e59-bcad-24ec1658029d","Type":"ContainerStarted","Data":"4adc46d882aecc7f951d7c476f4435231300360db0e7948597d13b0d353cdb17"} Feb 20 14:47:23.400422 master-0 kubenswrapper[7744]: I0220 14:47:23.400365 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" event={"ID":"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1","Type":"ContainerStarted","Data":"1a2c604d274d1ad76efc44b881e36ebb1157c8f409246d344247ee87da1d2861"} Feb 20 14:47:23.401423 master-0 kubenswrapper[7744]: I0220 14:47:23.401327 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m" event={"ID":"a51babf8-ee1c-4a83-a537-e40d5dc9b425","Type":"ContainerStarted","Data":"01d0df3bbdb5eda3970456ee9bc8ebaf7bd105b3c70d34b98d4e9e5b9678b3a1"} Feb 20 14:47:23.403265 master-0 kubenswrapper[7744]: I0220 14:47:23.403194 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-99lkv" 
event={"ID":"5ea4c132-b6d0-4dc9-942d-48e359eed418","Type":"ContainerStarted","Data":"f999ff4f0f2066a6276195a06d42bb6e1b1ff00d93613cff0f6a63447e475eb5"} Feb 20 14:47:23.403265 master-0 kubenswrapper[7744]: I0220 14:47:23.403253 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-99lkv" event={"ID":"5ea4c132-b6d0-4dc9-942d-48e359eed418","Type":"ContainerStarted","Data":"6de47c9027ea6c2d8b35fbec623ec40ec3080ea6f35588d0df87b3d552d897e5"} Feb 20 14:47:23.410166 master-0 kubenswrapper[7744]: I0220 14:47:23.405372 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" event={"ID":"419f28a9-8fd7-4b59-9554-4d884a1208b5","Type":"ContainerStarted","Data":"8b6e9e82961a1b1569e8b4f8d72a5575024ad0e3ea1e52d7f46885fdbde3d82b"} Feb 20 14:47:23.410496 master-0 kubenswrapper[7744]: I0220 14:47:23.410452 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" event={"ID":"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb","Type":"ContainerStarted","Data":"aa71c4fa879120a78bd3b6a5ee4f553adcd2305018af6f53632371d2a776a283"} Feb 20 14:47:23.411820 master-0 kubenswrapper[7744]: I0220 14:47:23.411789 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" event={"ID":"4cede061-d85a-4366-9f1e-90be51f726fc","Type":"ContainerStarted","Data":"5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5"} Feb 20 14:47:23.413426 master-0 kubenswrapper[7744]: I0220 14:47:23.413390 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" event={"ID":"19ce4b45-db46-4fc3-8d72-963de22f026b","Type":"ContainerStarted","Data":"4b1e59dab15a09dcf91960fbb173589cc11af80f8d763992d3393dd40ec3c134"} Feb 20 14:47:23.413426 master-0 kubenswrapper[7744]: I0220 14:47:23.413422 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" event={"ID":"19ce4b45-db46-4fc3-8d72-963de22f026b","Type":"ContainerStarted","Data":"ea1c995ced42f5f70bb2e5d4eefdbbd65e8b628be215ee59daaba52d55c8ad0f"} Feb 20 14:47:23.416293 master-0 kubenswrapper[7744]: I0220 14:47:23.416241 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7f939e3a-7a2a-4888-8c2b-0ec6944c0369","Type":"ContainerStarted","Data":"b700909d7ea6e14b52ffbfc78c56b71e289b566ad774e4a5644e93849ba59083"} Feb 20 14:47:23.416293 master-0 kubenswrapper[7744]: I0220 14:47:23.416290 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7f939e3a-7a2a-4888-8c2b-0ec6944c0369","Type":"ContainerStarted","Data":"0be6bc01f07eb32ca821be8ad8bfcf22d73cf30ee1c73629be37b1774d26349f"} Feb 20 14:47:23.418281 master-0 kubenswrapper[7744]: I0220 14:47:23.418236 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" event={"ID":"31d71c90-cab7-4411-9426-0713cb026294","Type":"ContainerStarted","Data":"87f837e420d5053d4442b3cdb2fe63d6e5ee3cff979f63a5c0302a5647f7f2f6"} Feb 20 14:47:23.421105 master-0 kubenswrapper[7744]: I0220 14:47:23.421073 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" event={"ID":"a1fb2774-6dd7-4429-9df3-4ddfcdaac939","Type":"ContainerStarted","Data":"8e0b13405e2daee0a5927d9bce9075eb42d4f3573a0155428e0f7790d97b9deb"} Feb 20 14:47:23.421105 master-0 kubenswrapper[7744]: I0220 14:47:23.421101 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" event={"ID":"a1fb2774-6dd7-4429-9df3-4ddfcdaac939","Type":"ContainerStarted","Data":"4ddd9de39788fb0527d2c46757c0e3580e52a5c156ee1c89170d5a4e7024de06"} Feb 20 14:47:23.423584 master-0 
kubenswrapper[7744]: I0220 14:47:23.423551 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" event={"ID":"c0a3548f-299c-4234-9bf1-c93efcb9740b","Type":"ContainerStarted","Data":"52bf43d0e30c121fdb642cca3e4e8c737348e2c0806817b6c660ae4bd355d192"} Feb 20 14:47:23.424243 master-0 kubenswrapper[7744]: I0220 14:47:23.424096 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:47:23.425852 master-0 kubenswrapper[7744]: I0220 14:47:23.425831 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" event={"ID":"d28490b0-96ca-4fe0-8fae-e6f8390f933b","Type":"ContainerStarted","Data":"e59cdaeddac19ab24ca5869bcd614625e7a228c980805267b2b8efe30053d76b"} Feb 20 14:47:23.426017 master-0 kubenswrapper[7744]: I0220 14:47:23.425854 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" event={"ID":"d28490b0-96ca-4fe0-8fae-e6f8390f933b","Type":"ContainerStarted","Data":"a6bc9f04ac2dc38938d0e42cdeacec4b6423ef553431eb66d814af552bb9732b"} Feb 20 14:47:23.430888 master-0 kubenswrapper[7744]: I0220 14:47:23.430850 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 14:47:23.485438 master-0 kubenswrapper[7744]: I0220 14:47:23.483460 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-config-volume\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 14:47:23.485438 master-0 kubenswrapper[7744]: I0220 14:47:23.483517 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthkk\" (UniqueName: 
\"kubernetes.io/projected/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-kube-api-access-tthkk\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 14:47:23.485438 master-0 kubenswrapper[7744]: I0220 14:47:23.483547 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-metrics-tls\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 14:47:23.488782 master-0 kubenswrapper[7744]: I0220 14:47:23.486342 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" podStartSLOduration=1.486332584 podStartE2EDuration="1.486332584s" podCreationTimestamp="2026-02-20 14:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:23.484877348 +0000 UTC m=+42.687077268" watchObservedRunningTime="2026-02-20 14:47:23.486332584 +0000 UTC m=+42.688532494" Feb 20 14:47:23.488782 master-0 kubenswrapper[7744]: E0220 14:47:23.486493 7744 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Feb 20 14:47:23.488782 master-0 kubenswrapper[7744]: E0220 14:47:23.486542 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-metrics-tls podName:bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0 nodeName:}" failed. No retries permitted until 2026-02-20 14:47:23.986528098 +0000 UTC m=+43.188728018 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-metrics-tls") pod "dns-default-dzfl8" (UID: "bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0") : secret "dns-default-metrics-tls" not found Feb 20 14:47:23.488782 master-0 kubenswrapper[7744]: I0220 14:47:23.488632 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-config-volume\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 14:47:23.523994 master-0 kubenswrapper[7744]: I0220 14:47:23.523056 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthkk\" (UniqueName: \"kubernetes.io/projected/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-kube-api-access-tthkk\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 14:47:23.552488 master-0 kubenswrapper[7744]: I0220 14:47:23.550976 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=4.550959618 podStartE2EDuration="4.550959618s" podCreationTimestamp="2026-02-20 14:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:23.549845541 +0000 UTC m=+42.752045461" watchObservedRunningTime="2026-02-20 14:47:23.550959618 +0000 UTC m=+42.753159538" Feb 20 14:47:23.700646 master-0 kubenswrapper[7744]: I0220 14:47:23.699882 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-djs75"] Feb 20 14:47:23.700646 master-0 kubenswrapper[7744]: I0220 14:47:23.700373 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-djs75" Feb 20 14:47:23.788589 master-0 kubenswrapper[7744]: I0220 14:47:23.788380 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/448aafd2-ffb3-42c5-8085-f6194d7862e5-hosts-file\") pod \"node-resolver-djs75\" (UID: \"448aafd2-ffb3-42c5-8085-f6194d7862e5\") " pod="openshift-dns/node-resolver-djs75" Feb 20 14:47:23.788589 master-0 kubenswrapper[7744]: I0220 14:47:23.788467 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv57n\" (UniqueName: \"kubernetes.io/projected/448aafd2-ffb3-42c5-8085-f6194d7862e5-kube-api-access-nv57n\") pod \"node-resolver-djs75\" (UID: \"448aafd2-ffb3-42c5-8085-f6194d7862e5\") " pod="openshift-dns/node-resolver-djs75" Feb 20 14:47:23.890005 master-0 kubenswrapper[7744]: I0220 14:47:23.889958 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv57n\" (UniqueName: \"kubernetes.io/projected/448aafd2-ffb3-42c5-8085-f6194d7862e5-kube-api-access-nv57n\") pod \"node-resolver-djs75\" (UID: \"448aafd2-ffb3-42c5-8085-f6194d7862e5\") " pod="openshift-dns/node-resolver-djs75" Feb 20 14:47:23.890208 master-0 kubenswrapper[7744]: I0220 14:47:23.890142 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/448aafd2-ffb3-42c5-8085-f6194d7862e5-hosts-file\") pod \"node-resolver-djs75\" (UID: \"448aafd2-ffb3-42c5-8085-f6194d7862e5\") " pod="openshift-dns/node-resolver-djs75" Feb 20 14:47:23.890355 master-0 kubenswrapper[7744]: I0220 14:47:23.890311 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/448aafd2-ffb3-42c5-8085-f6194d7862e5-hosts-file\") pod \"node-resolver-djs75\" (UID: \"448aafd2-ffb3-42c5-8085-f6194d7862e5\") " 
pod="openshift-dns/node-resolver-djs75" Feb 20 14:47:23.907680 master-0 kubenswrapper[7744]: I0220 14:47:23.907634 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv57n\" (UniqueName: \"kubernetes.io/projected/448aafd2-ffb3-42c5-8085-f6194d7862e5-kube-api-access-nv57n\") pod \"node-resolver-djs75\" (UID: \"448aafd2-ffb3-42c5-8085-f6194d7862e5\") " pod="openshift-dns/node-resolver-djs75" Feb 20 14:47:23.990784 master-0 kubenswrapper[7744]: I0220 14:47:23.990740 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-metrics-tls\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 14:47:23.995001 master-0 kubenswrapper[7744]: I0220 14:47:23.994983 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-metrics-tls\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 14:47:24.021081 master-0 kubenswrapper[7744]: I0220 14:47:24.021036 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-djs75" Feb 20 14:47:24.037034 master-0 kubenswrapper[7744]: W0220 14:47:24.036990 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod448aafd2_ffb3_42c5_8085_f6194d7862e5.slice/crio-0170d69b891340d8304a044f9ba11f3c45572b8e1e7f16d78f09e0c25d8c5a22 WatchSource:0}: Error finding container 0170d69b891340d8304a044f9ba11f3c45572b8e1e7f16d78f09e0c25d8c5a22: Status 404 returned error can't find the container with id 0170d69b891340d8304a044f9ba11f3c45572b8e1e7f16d78f09e0c25d8c5a22 Feb 20 14:47:24.272152 master-0 kubenswrapper[7744]: I0220 14:47:24.270222 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dzfl8" Feb 20 14:47:24.432343 master-0 kubenswrapper[7744]: I0220 14:47:24.432293 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-djs75" event={"ID":"448aafd2-ffb3-42c5-8085-f6194d7862e5","Type":"ContainerStarted","Data":"3c665ce9d0faa8d1bd8dd54a769a338f58b327d40c8c797dba804d0cf7affadc"} Feb 20 14:47:24.432343 master-0 kubenswrapper[7744]: I0220 14:47:24.432341 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-djs75" event={"ID":"448aafd2-ffb3-42c5-8085-f6194d7862e5","Type":"ContainerStarted","Data":"0170d69b891340d8304a044f9ba11f3c45572b8e1e7f16d78f09e0c25d8c5a22"} Feb 20 14:47:24.447216 master-0 kubenswrapper[7744]: I0220 14:47:24.447157 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-djs75" podStartSLOduration=1.44713683 podStartE2EDuration="1.44713683s" podCreationTimestamp="2026-02-20 14:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:24.443297756 +0000 UTC m=+43.645497686" watchObservedRunningTime="2026-02-20 14:47:24.44713683 +0000 
UTC m=+43.649336750" Feb 20 14:47:24.748556 master-0 kubenswrapper[7744]: I0220 14:47:24.748505 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dzfl8"] Feb 20 14:47:24.757467 master-0 kubenswrapper[7744]: W0220 14:47:24.757434 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf7fe27e_1de0_4d90_9cd9_8625ac4e01d0.slice/crio-fa9d778b1d5703420b9779e5e17c8c6a6104fc97f8264778eb9ed382719853b9 WatchSource:0}: Error finding container fa9d778b1d5703420b9779e5e17c8c6a6104fc97f8264778eb9ed382719853b9: Status 404 returned error can't find the container with id fa9d778b1d5703420b9779e5e17c8c6a6104fc97f8264778eb9ed382719853b9 Feb 20 14:47:25.052973 master-0 kubenswrapper[7744]: I0220 14:47:25.051350 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7659f6b598-z8454"] Feb 20 14:47:25.052973 master-0 kubenswrapper[7744]: I0220 14:47:25.052715 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.072962 master-0 kubenswrapper[7744]: W0220 14:47:25.062366 7744 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'master-0' and this object Feb 20 14:47:25.072962 master-0 kubenswrapper[7744]: E0220 14:47:25.062437 7744 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 20 14:47:25.072962 master-0 kubenswrapper[7744]: W0220 14:47:25.062524 7744 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'master-0' and this object Feb 20 14:47:25.072962 master-0 kubenswrapper[7744]: E0220 14:47:25.062543 7744 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 20 14:47:25.072962 master-0 kubenswrapper[7744]: I0220 14:47:25.069440 7744 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"audit-1" Feb 20 14:47:25.072962 master-0 kubenswrapper[7744]: I0220 14:47:25.069481 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 20 14:47:25.072962 master-0 kubenswrapper[7744]: I0220 14:47:25.069439 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 20 14:47:25.072962 master-0 kubenswrapper[7744]: I0220 14:47:25.069674 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 20 14:47:25.072962 master-0 kubenswrapper[7744]: I0220 14:47:25.070278 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 20 14:47:25.072962 master-0 kubenswrapper[7744]: I0220 14:47:25.071876 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 20 14:47:25.085009 master-0 kubenswrapper[7744]: I0220 14:47:25.075305 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7659f6b598-z8454"] Feb 20 14:47:25.126885 master-0 kubenswrapper[7744]: I0220 14:47:25.126822 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-audit-policies\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.127122 master-0 kubenswrapper[7744]: I0220 14:47:25.126906 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mchbh\" (UniqueName: \"kubernetes.io/projected/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-kube-api-access-mchbh\") pod \"apiserver-7659f6b598-z8454\" (UID: 
\"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.127122 master-0 kubenswrapper[7744]: I0220 14:47:25.126948 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-serving-ca\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.127122 master-0 kubenswrapper[7744]: I0220 14:47:25.126980 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-encryption-config\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.127122 master-0 kubenswrapper[7744]: I0220 14:47:25.127005 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-audit-dir\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.127122 master-0 kubenswrapper[7744]: I0220 14:47:25.127060 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-serving-cert\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.127122 master-0 kubenswrapper[7744]: I0220 14:47:25.127081 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-trusted-ca-bundle\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.127122 master-0 kubenswrapper[7744]: I0220 14:47:25.127102 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-client\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.228429 master-0 kubenswrapper[7744]: I0220 14:47:25.228356 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-serving-cert\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.228429 master-0 kubenswrapper[7744]: I0220 14:47:25.228426 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-trusted-ca-bundle\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.228860 master-0 kubenswrapper[7744]: I0220 14:47:25.228458 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-client\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.228860 master-0 kubenswrapper[7744]: I0220 14:47:25.228502 7744 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-audit-policies\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.228860 master-0 kubenswrapper[7744]: I0220 14:47:25.228550 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mchbh\" (UniqueName: \"kubernetes.io/projected/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-kube-api-access-mchbh\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.228860 master-0 kubenswrapper[7744]: I0220 14:47:25.228690 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-serving-ca\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.228860 master-0 kubenswrapper[7744]: I0220 14:47:25.228742 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-encryption-config\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.228860 master-0 kubenswrapper[7744]: I0220 14:47:25.228769 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-audit-dir\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 
20 14:47:25.229503 master-0 kubenswrapper[7744]: I0220 14:47:25.229270 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-trusted-ca-bundle\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.229617 master-0 kubenswrapper[7744]: I0220 14:47:25.229578 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-audit-dir\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.229617 master-0 kubenswrapper[7744]: I0220 14:47:25.229591 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-audit-policies\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.230141 master-0 kubenswrapper[7744]: I0220 14:47:25.230096 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-serving-ca\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.234033 master-0 kubenswrapper[7744]: I0220 14:47:25.233953 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-encryption-config\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " 
pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.251259 master-0 kubenswrapper[7744]: I0220 14:47:25.251210 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mchbh\" (UniqueName: \"kubernetes.io/projected/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-kube-api-access-mchbh\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:25.442990 master-0 kubenswrapper[7744]: I0220 14:47:25.442854 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dzfl8" event={"ID":"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0","Type":"ContainerStarted","Data":"fa9d778b1d5703420b9779e5e17c8c6a6104fc97f8264778eb9ed382719853b9"} Feb 20 14:47:26.229407 master-0 kubenswrapper[7744]: E0220 14:47:26.229338 7744 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 14:47:26.230001 master-0 kubenswrapper[7744]: E0220 14:47:26.229516 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-serving-cert podName:a8c0a6d2-f1f9-49e3-9475-4983b50667bf nodeName:}" failed. No retries permitted until 2026-02-20 14:47:26.729424017 +0000 UTC m=+45.931623957 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-serving-cert") pod "apiserver-7659f6b598-z8454" (UID: "a8c0a6d2-f1f9-49e3-9475-4983b50667bf") : failed to sync secret cache: timed out waiting for the condition Feb 20 14:47:26.230001 master-0 kubenswrapper[7744]: E0220 14:47:26.229736 7744 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 20 14:47:26.230001 master-0 kubenswrapper[7744]: E0220 14:47:26.229786 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-client podName:a8c0a6d2-f1f9-49e3-9475-4983b50667bf nodeName:}" failed. No retries permitted until 2026-02-20 14:47:26.729768925 +0000 UTC m=+45.931968865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-client") pod "apiserver-7659f6b598-z8454" (UID: "a8c0a6d2-f1f9-49e3-9475-4983b50667bf") : failed to sync secret cache: timed out waiting for the condition Feb 20 14:47:26.329804 master-0 kubenswrapper[7744]: I0220 14:47:26.329750 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 20 14:47:26.467442 master-0 kubenswrapper[7744]: I0220 14:47:26.467388 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 20 14:47:26.751808 master-0 kubenswrapper[7744]: I0220 14:47:26.751749 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-serving-cert\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:26.751808 master-0 
kubenswrapper[7744]: I0220 14:47:26.751816 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-client\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:26.755621 master-0 kubenswrapper[7744]: I0220 14:47:26.755585 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-serving-cert\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:26.756606 master-0 kubenswrapper[7744]: I0220 14:47:26.756558 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-client\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:26.890233 master-0 kubenswrapper[7744]: I0220 14:47:26.890123 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 14:47:27.934252 master-0 kubenswrapper[7744]: I0220 14:47:27.934173 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6947c8468f-9tjhp"] Feb 20 14:47:27.953848 master-0 kubenswrapper[7744]: I0220 14:47:27.953778 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"] Feb 20 14:47:28.522418 master-0 kubenswrapper[7744]: I0220 14:47:28.519666 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 20 14:47:28.522418 master-0 kubenswrapper[7744]: I0220 14:47:28.519908 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="7f939e3a-7a2a-4888-8c2b-0ec6944c0369" containerName="installer" containerID="cri-o://b700909d7ea6e14b52ffbfc78c56b71e289b566ad774e4a5644e93849ba59083" gracePeriod=30 Feb 20 14:47:28.734080 master-0 kubenswrapper[7744]: I0220 14:47:28.733962 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7659f6b598-z8454"] Feb 20 14:47:28.976274 master-0 kubenswrapper[7744]: W0220 14:47:28.976213 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8c0a6d2_f1f9_49e3_9475_4983b50667bf.slice/crio-e4a8f393be39a3a9efde4bf2412add15fe01a8acdf8e5580190095494f3e6b47 WatchSource:0}: Error finding container e4a8f393be39a3a9efde4bf2412add15fe01a8acdf8e5580190095494f3e6b47: Status 404 returned error can't find the container with id e4a8f393be39a3a9efde4bf2412add15fe01a8acdf8e5580190095494f3e6b47 Feb 20 14:47:29.467237 master-0 kubenswrapper[7744]: I0220 14:47:29.467179 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" 
event={"ID":"a8c0a6d2-f1f9-49e3-9475-4983b50667bf","Type":"ContainerStarted","Data":"e4a8f393be39a3a9efde4bf2412add15fe01a8acdf8e5580190095494f3e6b47"}
Feb 20 14:47:29.468639 master-0 kubenswrapper[7744]: I0220 14:47:29.468609 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7f939e3a-7a2a-4888-8c2b-0ec6944c0369/installer/0.log"
Feb 20 14:47:29.468698 master-0 kubenswrapper[7744]: I0220 14:47:29.468648 7744 generic.go:334] "Generic (PLEG): container finished" podID="7f939e3a-7a2a-4888-8c2b-0ec6944c0369" containerID="b700909d7ea6e14b52ffbfc78c56b71e289b566ad774e4a5644e93849ba59083" exitCode=1
Feb 20 14:47:29.468698 master-0 kubenswrapper[7744]: I0220 14:47:29.468667 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7f939e3a-7a2a-4888-8c2b-0ec6944c0369","Type":"ContainerDied","Data":"b700909d7ea6e14b52ffbfc78c56b71e289b566ad774e4a5644e93849ba59083"}
Feb 20 14:47:30.351599 master-0 kubenswrapper[7744]: I0220 14:47:30.351555 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7f939e3a-7a2a-4888-8c2b-0ec6944c0369/installer/0.log"
Feb 20 14:47:30.370512 master-0 kubenswrapper[7744]: I0220 14:47:30.351618 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 20 14:47:30.474964 master-0 kubenswrapper[7744]: I0220 14:47:30.474905 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" event={"ID":"2027dfb0-2633-4e59-bcad-24ec1658029d","Type":"ContainerStarted","Data":"6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e"}
Feb 20 14:47:30.475232 master-0 kubenswrapper[7744]: I0220 14:47:30.474970 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" podUID="2027dfb0-2633-4e59-bcad-24ec1658029d" containerName="controller-manager" containerID="cri-o://6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e" gracePeriod=30
Feb 20 14:47:30.475418 master-0 kubenswrapper[7744]: I0220 14:47:30.475370 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:30.477312 master-0 kubenswrapper[7744]: I0220 14:47:30.477267 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7f939e3a-7a2a-4888-8c2b-0ec6944c0369/installer/0.log"
Feb 20 14:47:30.477399 master-0 kubenswrapper[7744]: I0220 14:47:30.477352 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7f939e3a-7a2a-4888-8c2b-0ec6944c0369","Type":"ContainerDied","Data":"0be6bc01f07eb32ca821be8ad8bfcf22d73cf30ee1c73629be37b1774d26349f"}
Feb 20 14:47:30.477449 master-0 kubenswrapper[7744]: I0220 14:47:30.477411 7744 scope.go:117] "RemoveContainer" containerID="b700909d7ea6e14b52ffbfc78c56b71e289b566ad774e4a5644e93849ba59083"
Feb 20 14:47:30.477535 master-0 kubenswrapper[7744]: I0220 14:47:30.477504 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 20 14:47:30.477851 master-0 kubenswrapper[7744]: I0220 14:47:30.477811 7744 patch_prober.go:28] interesting pod/controller-manager-6947c8468f-9tjhp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.36:8443/healthz\": dial tcp 10.128.0.36:8443: connect: connection refused" start-of-body=
Feb 20 14:47:30.478026 master-0 kubenswrapper[7744]: I0220 14:47:30.477855 7744 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" podUID="2027dfb0-2633-4e59-bcad-24ec1658029d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.36:8443/healthz\": dial tcp 10.128.0.36:8443: connect: connection refused"
Feb 20 14:47:30.484582 master-0 kubenswrapper[7744]: I0220 14:47:30.484532 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m" event={"ID":"a51babf8-ee1c-4a83-a537-e40d5dc9b425","Type":"ContainerStarted","Data":"8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1"}
Feb 20 14:47:30.484724 master-0 kubenswrapper[7744]: I0220 14:47:30.484651 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m" podUID="a51babf8-ee1c-4a83-a537-e40d5dc9b425" containerName="route-controller-manager" containerID="cri-o://8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1" gracePeriod=30
Feb 20 14:47:30.484790 master-0 kubenswrapper[7744]: I0220 14:47:30.484766 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:30.487427 master-0 kubenswrapper[7744]: I0220 14:47:30.487387 7744 generic.go:334] "Generic (PLEG): container finished" podID="c5429ce9-f3b7-4024-ac77-3a93a2ac77bb" containerID="18d29e751749b7ea9876b738d17d6268502d86978c1804f16e31b40402471107" exitCode=0
Feb 20 14:47:30.487509 master-0 kubenswrapper[7744]: I0220 14:47:30.487425 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" event={"ID":"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb","Type":"ContainerDied","Data":"18d29e751749b7ea9876b738d17d6268502d86978c1804f16e31b40402471107"}
Feb 20 14:47:30.487558 master-0 kubenswrapper[7744]: I0220 14:47:30.487525 7744 patch_prober.go:28] interesting pod/route-controller-manager-869b4b97fd-fvb7m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.35:8443/healthz\": dial tcp 10.128.0.35:8443: connect: connection refused" start-of-body=
Feb 20 14:47:30.487603 master-0 kubenswrapper[7744]: I0220 14:47:30.487561 7744 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m" podUID="a51babf8-ee1c-4a83-a537-e40d5dc9b425" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.35:8443/healthz\": dial tcp 10.128.0.35:8443: connect: connection refused"
Feb 20 14:47:30.492600 master-0 kubenswrapper[7744]: I0220 14:47:30.492542 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" podStartSLOduration=15.078784145 podStartE2EDuration="22.492527429s" podCreationTimestamp="2026-02-20 14:47:08 +0000 UTC" firstStartedPulling="2026-02-20 14:47:22.774348549 +0000 UTC m=+41.976548469" lastFinishedPulling="2026-02-20 14:47:30.188091833 +0000 UTC m=+49.390291753" observedRunningTime="2026-02-20 14:47:30.492300014 +0000 UTC m=+49.694499944" watchObservedRunningTime="2026-02-20 14:47:30.492527429 +0000 UTC m=+49.694727349"
Feb 20 14:47:30.514499 master-0 kubenswrapper[7744]: I0220 14:47:30.512777 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-kube-api-access\") pod \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\" (UID: \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\") "
Feb 20 14:47:30.514499 master-0 kubenswrapper[7744]: I0220 14:47:30.512910 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-var-lock\") pod \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\" (UID: \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\") "
Feb 20 14:47:30.514499 master-0 kubenswrapper[7744]: I0220 14:47:30.512979 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-kubelet-dir\") pod \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\" (UID: \"7f939e3a-7a2a-4888-8c2b-0ec6944c0369\") "
Feb 20 14:47:30.514499 master-0 kubenswrapper[7744]: I0220 14:47:30.513135 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-var-lock" (OuterVolumeSpecName: "var-lock") pod "7f939e3a-7a2a-4888-8c2b-0ec6944c0369" (UID: "7f939e3a-7a2a-4888-8c2b-0ec6944c0369"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:47:30.514499 master-0 kubenswrapper[7744]: I0220 14:47:30.513231 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7f939e3a-7a2a-4888-8c2b-0ec6944c0369" (UID: "7f939e3a-7a2a-4888-8c2b-0ec6944c0369"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:47:30.514499 master-0 kubenswrapper[7744]: I0220 14:47:30.513355 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:30.514499 master-0 kubenswrapper[7744]: I0220 14:47:30.513381 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:30.519392 master-0 kubenswrapper[7744]: I0220 14:47:30.519069 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7f939e3a-7a2a-4888-8c2b-0ec6944c0369" (UID: "7f939e3a-7a2a-4888-8c2b-0ec6944c0369"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:47:30.615994 master-0 kubenswrapper[7744]: I0220 14:47:30.615938 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f939e3a-7a2a-4888-8c2b-0ec6944c0369-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:30.721511 master-0 kubenswrapper[7744]: I0220 14:47:30.721319 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m" podStartSLOduration=17.190222062 podStartE2EDuration="22.721300073s" podCreationTimestamp="2026-02-20 14:47:08 +0000 UTC" firstStartedPulling="2026-02-20 14:47:22.827035833 +0000 UTC m=+42.029235753" lastFinishedPulling="2026-02-20 14:47:28.358113844 +0000 UTC m=+47.560313764" observedRunningTime="2026-02-20 14:47:30.530764571 +0000 UTC m=+49.732964491" watchObservedRunningTime="2026-02-20 14:47:30.721300073 +0000 UTC m=+49.923499993"
Feb 20 14:47:30.721985 master-0 kubenswrapper[7744]: I0220 14:47:30.721902 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 20 14:47:30.722157 master-0 kubenswrapper[7744]: E0220 14:47:30.722125 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f939e3a-7a2a-4888-8c2b-0ec6944c0369" containerName="installer"
Feb 20 14:47:30.722157 master-0 kubenswrapper[7744]: I0220 14:47:30.722145 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f939e3a-7a2a-4888-8c2b-0ec6944c0369" containerName="installer"
Feb 20 14:47:30.722255 master-0 kubenswrapper[7744]: I0220 14:47:30.722236 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f939e3a-7a2a-4888-8c2b-0ec6944c0369" containerName="installer"
Feb 20 14:47:30.722806 master-0 kubenswrapper[7744]: I0220 14:47:30.722772 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 14:47:30.729562 master-0 kubenswrapper[7744]: I0220 14:47:30.728034 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 20 14:47:30.803193 master-0 kubenswrapper[7744]: I0220 14:47:30.803148 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 20 14:47:30.823053 master-0 kubenswrapper[7744]: I0220 14:47:30.822966 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/975d0fde-cb2f-4599-b3b7-7de876307a61-kube-api-access\") pod \"installer-3-master-0\" (UID: \"975d0fde-cb2f-4599-b3b7-7de876307a61\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 14:47:30.823053 master-0 kubenswrapper[7744]: I0220 14:47:30.823025 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/975d0fde-cb2f-4599-b3b7-7de876307a61-var-lock\") pod \"installer-3-master-0\" (UID: \"975d0fde-cb2f-4599-b3b7-7de876307a61\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 14:47:30.823271 master-0 kubenswrapper[7744]: I0220 14:47:30.823116 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/975d0fde-cb2f-4599-b3b7-7de876307a61-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"975d0fde-cb2f-4599-b3b7-7de876307a61\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 14:47:30.825292 master-0 kubenswrapper[7744]: I0220 14:47:30.825249 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 20 14:47:30.833274 master-0 kubenswrapper[7744]: I0220 14:47:30.833244 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6947c8468f-9tjhp_2027dfb0-2633-4e59-bcad-24ec1658029d/controller-manager/0.log"
Feb 20 14:47:30.833445 master-0 kubenswrapper[7744]: I0220 14:47:30.833323 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:30.925187 master-0 kubenswrapper[7744]: I0220 14:47:30.925145 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-config\") pod \"2027dfb0-2633-4e59-bcad-24ec1658029d\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") "
Feb 20 14:47:30.925286 master-0 kubenswrapper[7744]: I0220 14:47:30.925194 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-proxy-ca-bundles\") pod \"2027dfb0-2633-4e59-bcad-24ec1658029d\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") "
Feb 20 14:47:30.925331 master-0 kubenswrapper[7744]: I0220 14:47:30.925287 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szmpv\" (UniqueName: \"kubernetes.io/projected/2027dfb0-2633-4e59-bcad-24ec1658029d-kube-api-access-szmpv\") pod \"2027dfb0-2633-4e59-bcad-24ec1658029d\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") "
Feb 20 14:47:30.925331 master-0 kubenswrapper[7744]: I0220 14:47:30.925321 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2027dfb0-2633-4e59-bcad-24ec1658029d-serving-cert\") pod \"2027dfb0-2633-4e59-bcad-24ec1658029d\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") "
Feb 20 14:47:30.925395 master-0 kubenswrapper[7744]: I0220 14:47:30.925349 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-client-ca\") pod \"2027dfb0-2633-4e59-bcad-24ec1658029d\" (UID: \"2027dfb0-2633-4e59-bcad-24ec1658029d\") "
Feb 20 14:47:30.926448 master-0 kubenswrapper[7744]: I0220 14:47:30.926414 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-config" (OuterVolumeSpecName: "config") pod "2027dfb0-2633-4e59-bcad-24ec1658029d" (UID: "2027dfb0-2633-4e59-bcad-24ec1658029d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:47:30.926740 master-0 kubenswrapper[7744]: I0220 14:47:30.926709 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-client-ca" (OuterVolumeSpecName: "client-ca") pod "2027dfb0-2633-4e59-bcad-24ec1658029d" (UID: "2027dfb0-2633-4e59-bcad-24ec1658029d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:47:30.926786 master-0 kubenswrapper[7744]: I0220 14:47:30.926747 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2027dfb0-2633-4e59-bcad-24ec1658029d" (UID: "2027dfb0-2633-4e59-bcad-24ec1658029d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:47:30.926908 master-0 kubenswrapper[7744]: I0220 14:47:30.926886 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/975d0fde-cb2f-4599-b3b7-7de876307a61-var-lock\") pod \"installer-3-master-0\" (UID: \"975d0fde-cb2f-4599-b3b7-7de876307a61\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 14:47:30.926968 master-0 kubenswrapper[7744]: I0220 14:47:30.926932 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/975d0fde-cb2f-4599-b3b7-7de876307a61-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"975d0fde-cb2f-4599-b3b7-7de876307a61\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 14:47:30.927176 master-0 kubenswrapper[7744]: I0220 14:47:30.926995 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/975d0fde-cb2f-4599-b3b7-7de876307a61-var-lock\") pod \"installer-3-master-0\" (UID: \"975d0fde-cb2f-4599-b3b7-7de876307a61\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 14:47:30.927176 master-0 kubenswrapper[7744]: I0220 14:47:30.927029 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/975d0fde-cb2f-4599-b3b7-7de876307a61-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"975d0fde-cb2f-4599-b3b7-7de876307a61\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 14:47:30.927176 master-0 kubenswrapper[7744]: I0220 14:47:30.927089 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/975d0fde-cb2f-4599-b3b7-7de876307a61-kube-api-access\") pod \"installer-3-master-0\" (UID: \"975d0fde-cb2f-4599-b3b7-7de876307a61\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 14:47:30.927176 master-0 kubenswrapper[7744]: I0220 14:47:30.927144 7744 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-config\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:30.927176 master-0 kubenswrapper[7744]: I0220 14:47:30.927155 7744 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:30.927176 master-0 kubenswrapper[7744]: I0220 14:47:30.927167 7744 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2027dfb0-2633-4e59-bcad-24ec1658029d-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:30.929274 master-0 kubenswrapper[7744]: I0220 14:47:30.929242 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2027dfb0-2633-4e59-bcad-24ec1658029d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2027dfb0-2633-4e59-bcad-24ec1658029d" (UID: "2027dfb0-2633-4e59-bcad-24ec1658029d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 14:47:30.929480 master-0 kubenswrapper[7744]: I0220 14:47:30.929452 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2027dfb0-2633-4e59-bcad-24ec1658029d-kube-api-access-szmpv" (OuterVolumeSpecName: "kube-api-access-szmpv") pod "2027dfb0-2633-4e59-bcad-24ec1658029d" (UID: "2027dfb0-2633-4e59-bcad-24ec1658029d"). InnerVolumeSpecName "kube-api-access-szmpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:47:30.942098 master-0 kubenswrapper[7744]: I0220 14:47:30.942071 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/975d0fde-cb2f-4599-b3b7-7de876307a61-kube-api-access\") pod \"installer-3-master-0\" (UID: \"975d0fde-cb2f-4599-b3b7-7de876307a61\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 14:47:31.028445 master-0 kubenswrapper[7744]: I0220 14:47:31.028411 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szmpv\" (UniqueName: \"kubernetes.io/projected/2027dfb0-2633-4e59-bcad-24ec1658029d-kube-api-access-szmpv\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:31.028445 master-0 kubenswrapper[7744]: I0220 14:47:31.028442 7744 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2027dfb0-2633-4e59-bcad-24ec1658029d-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:31.038279 master-0 kubenswrapper[7744]: I0220 14:47:31.038239 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 14:47:31.049744 master-0 kubenswrapper[7744]: I0220 14:47:31.049693 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f939e3a-7a2a-4888-8c2b-0ec6944c0369" path="/var/lib/kubelet/pods/7f939e3a-7a2a-4888-8c2b-0ec6944c0369/volumes"
Feb 20 14:47:31.425860 master-0 kubenswrapper[7744]: I0220 14:47:31.423969 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-869b4b97fd-fvb7m_a51babf8-ee1c-4a83-a537-e40d5dc9b425/route-controller-manager/0.log"
Feb 20 14:47:31.425860 master-0 kubenswrapper[7744]: I0220 14:47:31.424029 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:31.480670 master-0 kubenswrapper[7744]: I0220 14:47:31.480547 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 20 14:47:31.498401 master-0 kubenswrapper[7744]: W0220 14:47:31.498359 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod975d0fde_cb2f_4599_b3b7_7de876307a61.slice/crio-60d5adb534a09e9e59437dcfd0ecba0aa4cc034a5ffeab8cd4bf643934aa8641 WatchSource:0}: Error finding container 60d5adb534a09e9e59437dcfd0ecba0aa4cc034a5ffeab8cd4bf643934aa8641: Status 404 returned error can't find the container with id 60d5adb534a09e9e59437dcfd0ecba0aa4cc034a5ffeab8cd4bf643934aa8641
Feb 20 14:47:31.504491 master-0 kubenswrapper[7744]: I0220 14:47:31.504447 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" event={"ID":"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb","Type":"ContainerStarted","Data":"e951f4f03371ec55dc5f3e48a1367b2b71d375b075a902f792202889dbbea009"}
Feb 20 14:47:31.504576 master-0 kubenswrapper[7744]: I0220 14:47:31.504500 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" event={"ID":"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb","Type":"ContainerStarted","Data":"d02919abbb4b42258350951ea9c9a4298d4828f8160b708b55f0a6383e536f04"}
Feb 20 14:47:31.507661 master-0 kubenswrapper[7744]: I0220 14:47:31.507641 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-6947c8468f-9tjhp_2027dfb0-2633-4e59-bcad-24ec1658029d/controller-manager/0.log"
Feb 20 14:47:31.507747 master-0 kubenswrapper[7744]: I0220 14:47:31.507669 7744 generic.go:334] "Generic (PLEG): container finished" podID="2027dfb0-2633-4e59-bcad-24ec1658029d" containerID="6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e" exitCode=2
Feb 20 14:47:31.507747 master-0 kubenswrapper[7744]: I0220 14:47:31.507703 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" event={"ID":"2027dfb0-2633-4e59-bcad-24ec1658029d","Type":"ContainerDied","Data":"6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e"}
Feb 20 14:47:31.507747 master-0 kubenswrapper[7744]: I0220 14:47:31.507720 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp" event={"ID":"2027dfb0-2633-4e59-bcad-24ec1658029d","Type":"ContainerDied","Data":"4adc46d882aecc7f951d7c476f4435231300360db0e7948597d13b0d353cdb17"}
Feb 20 14:47:31.507747 master-0 kubenswrapper[7744]: I0220 14:47:31.507736 7744 scope.go:117] "RemoveContainer" containerID="6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e"
Feb 20 14:47:31.507896 master-0 kubenswrapper[7744]: I0220 14:47:31.507804 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6947c8468f-9tjhp"
Feb 20 14:47:31.513084 master-0 kubenswrapper[7744]: I0220 14:47:31.513031 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dzfl8" event={"ID":"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0","Type":"ContainerStarted","Data":"ba357a800fe404f4a9748472118f17be9b12e6d6ab1016a71232a56b0b7e5488"}
Feb 20 14:47:31.513179 master-0 kubenswrapper[7744]: I0220 14:47:31.513087 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dzfl8" event={"ID":"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0","Type":"ContainerStarted","Data":"4431b0961f5ef90f6fd38e08f66e9d8f2d37ec169bbd7cc70b8c5597be2182b0"}
Feb 20 14:47:31.514174 master-0 kubenswrapper[7744]: I0220 14:47:31.514147 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-dzfl8"
Feb 20 14:47:31.515017 master-0 kubenswrapper[7744]: I0220 14:47:31.514997 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-869b4b97fd-fvb7m_a51babf8-ee1c-4a83-a537-e40d5dc9b425/route-controller-manager/0.log"
Feb 20 14:47:31.515086 master-0 kubenswrapper[7744]: I0220 14:47:31.515029 7744 generic.go:334] "Generic (PLEG): container finished" podID="a51babf8-ee1c-4a83-a537-e40d5dc9b425" containerID="8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1" exitCode=255
Feb 20 14:47:31.515086 master-0 kubenswrapper[7744]: I0220 14:47:31.515051 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m" event={"ID":"a51babf8-ee1c-4a83-a537-e40d5dc9b425","Type":"ContainerDied","Data":"8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1"}
Feb 20 14:47:31.515086 master-0 kubenswrapper[7744]: I0220 14:47:31.515067 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m" event={"ID":"a51babf8-ee1c-4a83-a537-e40d5dc9b425","Type":"ContainerDied","Data":"01d0df3bbdb5eda3970456ee9bc8ebaf7bd105b3c70d34b98d4e9e5b9678b3a1"}
Feb 20 14:47:31.515212 master-0 kubenswrapper[7744]: I0220 14:47:31.515113 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"
Feb 20 14:47:31.535030 master-0 kubenswrapper[7744]: I0220 14:47:31.533497 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a51babf8-ee1c-4a83-a537-e40d5dc9b425-client-ca\") pod \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") "
Feb 20 14:47:31.535030 master-0 kubenswrapper[7744]: I0220 14:47:31.533541 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/a51babf8-ee1c-4a83-a537-e40d5dc9b425-kube-api-access-zcqxx\") pod \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") "
Feb 20 14:47:31.535030 master-0 kubenswrapper[7744]: I0220 14:47:31.533578 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert\") pod \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") "
Feb 20 14:47:31.535030 master-0 kubenswrapper[7744]: I0220 14:47:31.533593 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51babf8-ee1c-4a83-a537-e40d5dc9b425-config\") pod \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\" (UID: \"a51babf8-ee1c-4a83-a537-e40d5dc9b425\") "
Feb 20 14:47:31.535030 master-0 kubenswrapper[7744]: I0220 14:47:31.534907 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" podStartSLOduration=11.136679878 podStartE2EDuration="17.534857002s" podCreationTimestamp="2026-02-20 14:47:14 +0000 UTC" firstStartedPulling="2026-02-20 14:47:22.583870429 +0000 UTC m=+41.786070349" lastFinishedPulling="2026-02-20 14:47:28.982047553 +0000 UTC m=+48.184247473" observedRunningTime="2026-02-20 14:47:31.53313656 +0000 UTC m=+50.735336480" watchObservedRunningTime="2026-02-20 14:47:31.534857002 +0000 UTC m=+50.737056922"
Feb 20 14:47:31.535507 master-0 kubenswrapper[7744]: I0220 14:47:31.535103 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a51babf8-ee1c-4a83-a537-e40d5dc9b425-client-ca" (OuterVolumeSpecName: "client-ca") pod "a51babf8-ee1c-4a83-a537-e40d5dc9b425" (UID: "a51babf8-ee1c-4a83-a537-e40d5dc9b425"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:47:31.536403 master-0 kubenswrapper[7744]: I0220 14:47:31.536379 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a51babf8-ee1c-4a83-a537-e40d5dc9b425-config" (OuterVolumeSpecName: "config") pod "a51babf8-ee1c-4a83-a537-e40d5dc9b425" (UID: "a51babf8-ee1c-4a83-a537-e40d5dc9b425"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:47:31.539792 master-0 kubenswrapper[7744]: I0220 14:47:31.539728 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a51babf8-ee1c-4a83-a537-e40d5dc9b425" (UID: "a51babf8-ee1c-4a83-a537-e40d5dc9b425"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 14:47:31.540140 master-0 kubenswrapper[7744]: I0220 14:47:31.539863 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a51babf8-ee1c-4a83-a537-e40d5dc9b425-kube-api-access-zcqxx" (OuterVolumeSpecName: "kube-api-access-zcqxx") pod "a51babf8-ee1c-4a83-a537-e40d5dc9b425" (UID: "a51babf8-ee1c-4a83-a537-e40d5dc9b425"). InnerVolumeSpecName "kube-api-access-zcqxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:47:31.544802 master-0 kubenswrapper[7744]: I0220 14:47:31.544757 7744 scope.go:117] "RemoveContainer" containerID="6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e"
Feb 20 14:47:31.545314 master-0 kubenswrapper[7744]: E0220 14:47:31.545253 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e\": container with ID starting with 6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e not found: ID does not exist" containerID="6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e"
Feb 20 14:47:31.545314 master-0 kubenswrapper[7744]: I0220 14:47:31.545278 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e"} err="failed to get container status \"6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e\": rpc error: code = NotFound desc = could not find container \"6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e\": container with ID starting with 6c2417a3f56bfa34f5d10a7f8c4ba992502fc4350d06aba97cc8cba61b41332e not found: ID does not exist"
Feb 20 14:47:31.545314 master-0 kubenswrapper[7744]: I0220 14:47:31.545316 7744 scope.go:117] "RemoveContainer" containerID="8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1"
Feb 20 14:47:31.553808 master-0 kubenswrapper[7744]: I0220 14:47:31.553747 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6947c8468f-9tjhp"]
Feb 20 14:47:31.560220 master-0 kubenswrapper[7744]: I0220 14:47:31.560176 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6947c8468f-9tjhp"]
Feb 20 14:47:31.565711 master-0 kubenswrapper[7744]: I0220 14:47:31.565668 7744 scope.go:117] "RemoveContainer" containerID="8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1"
Feb 20 14:47:31.566334 master-0 kubenswrapper[7744]: E0220 14:47:31.566288 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1\": container with ID starting with 8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1 not found: ID does not exist" containerID="8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1"
Feb 20 14:47:31.566420 master-0 kubenswrapper[7744]: I0220 14:47:31.566335 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1"} err="failed to get container status \"8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1\": rpc error: code = NotFound desc = could not find container \"8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1\": container with ID starting with 8ba678c3198913424bc6e8bfcb0063581e275ddd32b1a476eb210f4b2acb05d1 not found: ID does not exist"
Feb 20 14:47:31.578706 master-0 kubenswrapper[7744]: I0220 14:47:31.578651 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-dzfl8" podStartSLOduration=3.113557735 podStartE2EDuration="8.578631278s" podCreationTimestamp="2026-02-20 14:47:23 +0000 UTC" firstStartedPulling="2026-02-20 14:47:24.75955788 +0000 UTC m=+43.961757800" lastFinishedPulling="2026-02-20 14:47:30.224631383 +0000 UTC m=+49.426831343" observedRunningTime="2026-02-20 14:47:31.577754947 +0000 UTC m=+50.779954877" watchObservedRunningTime="2026-02-20 14:47:31.578631278 +0000 UTC m=+50.780831198"
Feb 20 14:47:31.635623 master-0 kubenswrapper[7744]: I0220 14:47:31.635562 7744 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a51babf8-ee1c-4a83-a537-e40d5dc9b425-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:31.635623 master-0 kubenswrapper[7744]: I0220 14:47:31.635600 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/a51babf8-ee1c-4a83-a537-e40d5dc9b425-kube-api-access-zcqxx\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:31.635623 master-0 kubenswrapper[7744]: I0220 14:47:31.635621 7744 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a51babf8-ee1c-4a83-a537-e40d5dc9b425-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:31.635623 master-0 kubenswrapper[7744]: I0220 14:47:31.635634 7744 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51babf8-ee1c-4a83-a537-e40d5dc9b425-config\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:31.850699 master-0 kubenswrapper[7744]: I0220 14:47:31.850653 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"]
Feb 20 14:47:31.856235 master-0 kubenswrapper[7744]: I0220 14:47:31.856195 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-869b4b97fd-fvb7m"]
Feb 20 14:47:32.294311 master-0 kubenswrapper[7744]: I0220 14:47:32.294247 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness"
status="" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 14:47:32.294311 master-0 kubenswrapper[7744]: I0220 14:47:32.294298 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: I0220 14:47:32.303362 7744 patch_prober.go:28] interesting pod/apiserver-776c8f54bc-gmvx8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [+]log ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [+]etcd ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [+]poststarthook/generic-apiserver-start-informers ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [+]poststarthook/max-in-flight-filter ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [+]poststarthook/project.openshift.io-projectcache ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [+]poststarthook/openshift.io-startinformers ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: livez check failed Feb 20 14:47:32.305036 master-0 kubenswrapper[7744]: I0220 14:47:32.303449 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" podUID="c5429ce9-f3b7-4024-ac77-3a93a2ac77bb" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:47:32.525396 master-0 kubenswrapper[7744]: I0220 14:47:32.525298 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"975d0fde-cb2f-4599-b3b7-7de876307a61","Type":"ContainerStarted","Data":"a59f2b3ca51cdc733c2fc543e6bd0ce183b3347c73680778d3a84d4f88dd4a1f"} Feb 20 14:47:32.525396 master-0 kubenswrapper[7744]: I0220 14:47:32.525359 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"975d0fde-cb2f-4599-b3b7-7de876307a61","Type":"ContainerStarted","Data":"60d5adb534a09e9e59437dcfd0ecba0aa4cc034a5ffeab8cd4bf643934aa8641"} Feb 20 14:47:32.539842 master-0 kubenswrapper[7744]: I0220 14:47:32.538222 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=2.538201124 podStartE2EDuration="2.538201124s" podCreationTimestamp="2026-02-20 14:47:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:32.53723351 +0000 UTC m=+51.739433450" watchObservedRunningTime="2026-02-20 14:47:32.538201124 +0000 UTC m=+51.740401054" Feb 20 14:47:32.827120 master-0 kubenswrapper[7744]: I0220 14:47:32.827078 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 20 14:47:32.827280 master-0 kubenswrapper[7744]: E0220 14:47:32.827262 7744 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a51babf8-ee1c-4a83-a537-e40d5dc9b425" containerName="route-controller-manager" Feb 20 14:47:32.827280 master-0 kubenswrapper[7744]: I0220 14:47:32.827277 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51babf8-ee1c-4a83-a537-e40d5dc9b425" containerName="route-controller-manager" Feb 20 14:47:32.827353 master-0 kubenswrapper[7744]: E0220 14:47:32.827297 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2027dfb0-2633-4e59-bcad-24ec1658029d" containerName="controller-manager" Feb 20 14:47:32.827353 master-0 kubenswrapper[7744]: I0220 14:47:32.827306 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="2027dfb0-2633-4e59-bcad-24ec1658029d" containerName="controller-manager" Feb 20 14:47:32.827429 master-0 kubenswrapper[7744]: I0220 14:47:32.827381 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51babf8-ee1c-4a83-a537-e40d5dc9b425" containerName="route-controller-manager" Feb 20 14:47:32.827429 master-0 kubenswrapper[7744]: I0220 14:47:32.827401 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="2027dfb0-2633-4e59-bcad-24ec1658029d" containerName="controller-manager" Feb 20 14:47:32.827821 master-0 kubenswrapper[7744]: I0220 14:47:32.827791 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 14:47:32.831223 master-0 kubenswrapper[7744]: I0220 14:47:32.831202 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 20 14:47:32.840087 master-0 kubenswrapper[7744]: I0220 14:47:32.838791 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 20 14:47:32.955975 master-0 kubenswrapper[7744]: I0220 14:47:32.954613 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/986049a1-b3e4-4dca-b178-55eaa7a27bfb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 14:47:32.955975 master-0 kubenswrapper[7744]: I0220 14:47:32.954656 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/986049a1-b3e4-4dca-b178-55eaa7a27bfb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 14:47:32.955975 master-0 kubenswrapper[7744]: I0220 14:47:32.954753 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/986049a1-b3e4-4dca-b178-55eaa7a27bfb-var-lock\") pod \"installer-1-master-0\" (UID: \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 14:47:33.043605 master-0 kubenswrapper[7744]: I0220 14:47:33.043140 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2027dfb0-2633-4e59-bcad-24ec1658029d" path="/var/lib/kubelet/pods/2027dfb0-2633-4e59-bcad-24ec1658029d/volumes" Feb 20 14:47:33.044213 master-0 
kubenswrapper[7744]: I0220 14:47:33.044168 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a51babf8-ee1c-4a83-a537-e40d5dc9b425" path="/var/lib/kubelet/pods/a51babf8-ee1c-4a83-a537-e40d5dc9b425/volumes" Feb 20 14:47:33.056330 master-0 kubenswrapper[7744]: I0220 14:47:33.056194 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/986049a1-b3e4-4dca-b178-55eaa7a27bfb-var-lock\") pod \"installer-1-master-0\" (UID: \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 14:47:33.056330 master-0 kubenswrapper[7744]: I0220 14:47:33.056285 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/986049a1-b3e4-4dca-b178-55eaa7a27bfb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 14:47:33.056549 master-0 kubenswrapper[7744]: I0220 14:47:33.056352 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/986049a1-b3e4-4dca-b178-55eaa7a27bfb-var-lock\") pod \"installer-1-master-0\" (UID: \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 14:47:33.056549 master-0 kubenswrapper[7744]: I0220 14:47:33.056417 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/986049a1-b3e4-4dca-b178-55eaa7a27bfb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 14:47:33.056549 master-0 kubenswrapper[7744]: I0220 14:47:33.056524 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/986049a1-b3e4-4dca-b178-55eaa7a27bfb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 14:47:33.074915 master-0 kubenswrapper[7744]: I0220 14:47:33.074636 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/986049a1-b3e4-4dca-b178-55eaa7a27bfb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 14:47:33.152716 master-0 kubenswrapper[7744]: I0220 14:47:33.152660 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 14:47:33.334890 master-0 kubenswrapper[7744]: I0220 14:47:33.334221 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69686f5989-xkhpg"] Feb 20 14:47:33.338178 master-0 kubenswrapper[7744]: I0220 14:47:33.336481 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj"] Feb 20 14:47:33.338178 master-0 kubenswrapper[7744]: I0220 14:47:33.337387 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.338178 master-0 kubenswrapper[7744]: I0220 14:47:33.337992 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.339809 master-0 kubenswrapper[7744]: I0220 14:47:33.339774 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69686f5989-xkhpg"] Feb 20 14:47:33.342026 master-0 kubenswrapper[7744]: I0220 14:47:33.341996 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj"] Feb 20 14:47:33.343452 master-0 kubenswrapper[7744]: I0220 14:47:33.343417 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 20 14:47:33.345168 master-0 kubenswrapper[7744]: I0220 14:47:33.345145 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 20 14:47:33.345396 master-0 kubenswrapper[7744]: I0220 14:47:33.345377 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 20 14:47:33.345640 master-0 kubenswrapper[7744]: I0220 14:47:33.345611 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 20 14:47:33.345789 master-0 kubenswrapper[7744]: I0220 14:47:33.345772 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 20 14:47:33.345942 master-0 kubenswrapper[7744]: I0220 14:47:33.345905 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 20 14:47:33.346118 master-0 kubenswrapper[7744]: I0220 14:47:33.346065 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 20 14:47:33.347132 master-0 kubenswrapper[7744]: I0220 14:47:33.347101 7744 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 20 14:47:33.347278 master-0 kubenswrapper[7744]: I0220 14:47:33.347250 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 20 14:47:33.347345 master-0 kubenswrapper[7744]: I0220 14:47:33.347310 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 20 14:47:33.353805 master-0 kubenswrapper[7744]: I0220 14:47:33.353771 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 20 14:47:33.460723 master-0 kubenswrapper[7744]: I0220 14:47:33.460648 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-proxy-ca-bundles\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.460912 master-0 kubenswrapper[7744]: I0220 14:47:33.460724 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-serving-cert\") pod \"route-controller-manager-6c6947b888-mrmnj\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") " pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.460912 master-0 kubenswrapper[7744]: I0220 14:47:33.460765 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-client-ca\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " 
pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.460912 master-0 kubenswrapper[7744]: I0220 14:47:33.460802 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26c30461-efe3-4999-9698-f3c478c71fa0-serving-cert\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.460912 master-0 kubenswrapper[7744]: I0220 14:47:33.460828 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9v89\" (UniqueName: \"kubernetes.io/projected/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-kube-api-access-c9v89\") pod \"route-controller-manager-6c6947b888-mrmnj\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") " pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.460912 master-0 kubenswrapper[7744]: I0220 14:47:33.460880 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-client-ca\") pod \"route-controller-manager-6c6947b888-mrmnj\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") " pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.460912 master-0 kubenswrapper[7744]: I0220 14:47:33.460910 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-config\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.461149 master-0 kubenswrapper[7744]: I0220 14:47:33.460958 7744 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxxc8\" (UniqueName: \"kubernetes.io/projected/26c30461-efe3-4999-9698-f3c478c71fa0-kube-api-access-gxxc8\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.461149 master-0 kubenswrapper[7744]: I0220 14:47:33.461006 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-config\") pod \"route-controller-manager-6c6947b888-mrmnj\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") " pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.530108 master-0 kubenswrapper[7744]: I0220 14:47:33.530044 7744 generic.go:334] "Generic (PLEG): container finished" podID="a8c0a6d2-f1f9-49e3-9475-4983b50667bf" containerID="a27dacc9767bb08d41caf26b14c781b3928a704b21733f539c8b91a44b0c4d18" exitCode=0 Feb 20 14:47:33.530628 master-0 kubenswrapper[7744]: I0220 14:47:33.530199 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" event={"ID":"a8c0a6d2-f1f9-49e3-9475-4983b50667bf","Type":"ContainerDied","Data":"a27dacc9767bb08d41caf26b14c781b3928a704b21733f539c8b91a44b0c4d18"} Feb 20 14:47:33.562943 master-0 kubenswrapper[7744]: I0220 14:47:33.562272 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-serving-cert\") pod \"route-controller-manager-6c6947b888-mrmnj\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") " pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.562943 master-0 kubenswrapper[7744]: I0220 14:47:33.562320 7744 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-client-ca\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.562943 master-0 kubenswrapper[7744]: I0220 14:47:33.562348 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26c30461-efe3-4999-9698-f3c478c71fa0-serving-cert\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.562943 master-0 kubenswrapper[7744]: I0220 14:47:33.562367 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9v89\" (UniqueName: \"kubernetes.io/projected/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-kube-api-access-c9v89\") pod \"route-controller-manager-6c6947b888-mrmnj\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") " pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.562943 master-0 kubenswrapper[7744]: I0220 14:47:33.562399 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-client-ca\") pod \"route-controller-manager-6c6947b888-mrmnj\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") " pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.562943 master-0 kubenswrapper[7744]: I0220 14:47:33.562504 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-config\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " 
pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.562943 master-0 kubenswrapper[7744]: I0220 14:47:33.562526 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxxc8\" (UniqueName: \"kubernetes.io/projected/26c30461-efe3-4999-9698-f3c478c71fa0-kube-api-access-gxxc8\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.562943 master-0 kubenswrapper[7744]: I0220 14:47:33.562561 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-config\") pod \"route-controller-manager-6c6947b888-mrmnj\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") " pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.562943 master-0 kubenswrapper[7744]: I0220 14:47:33.562579 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-proxy-ca-bundles\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.563414 master-0 kubenswrapper[7744]: I0220 14:47:33.563330 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-client-ca\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.563687 master-0 kubenswrapper[7744]: I0220 14:47:33.563515 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-proxy-ca-bundles\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.564655 master-0 kubenswrapper[7744]: I0220 14:47:33.564621 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-config\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.565050 master-0 kubenswrapper[7744]: I0220 14:47:33.565013 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-config\") pod \"route-controller-manager-6c6947b888-mrmnj\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") " pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.565226 master-0 kubenswrapper[7744]: I0220 14:47:33.565136 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-client-ca\") pod \"route-controller-manager-6c6947b888-mrmnj\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") " pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.569059 master-0 kubenswrapper[7744]: I0220 14:47:33.569012 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-serving-cert\") pod \"route-controller-manager-6c6947b888-mrmnj\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") " pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.570047 master-0 
kubenswrapper[7744]: I0220 14:47:33.569909 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26c30461-efe3-4999-9698-f3c478c71fa0-serving-cert\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.583473 master-0 kubenswrapper[7744]: I0220 14:47:33.583260 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9v89\" (UniqueName: \"kubernetes.io/projected/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-kube-api-access-c9v89\") pod \"route-controller-manager-6c6947b888-mrmnj\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") " pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" Feb 20 14:47:33.587043 master-0 kubenswrapper[7744]: I0220 14:47:33.586956 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxxc8\" (UniqueName: \"kubernetes.io/projected/26c30461-efe3-4999-9698-f3c478c71fa0-kube-api-access-gxxc8\") pod \"controller-manager-69686f5989-xkhpg\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") " pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" Feb 20 14:47:33.594107 master-0 kubenswrapper[7744]: I0220 14:47:33.594063 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 20 14:47:33.608062 master-0 kubenswrapper[7744]: W0220 14:47:33.608002 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod986049a1_b3e4_4dca_b178_55eaa7a27bfb.slice/crio-c5695ade0d175e611702bd38877ae968a3b086637dc9039f70a7cafe4447aa4c WatchSource:0}: Error finding container c5695ade0d175e611702bd38877ae968a3b086637dc9039f70a7cafe4447aa4c: Status 404 returned error can't find the container with id c5695ade0d175e611702bd38877ae968a3b086637dc9039f70a7cafe4447aa4c Feb 20 
Feb 20 14:47:33.663988 master-0 kubenswrapper[7744]: I0220 14:47:33.663937 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj"
Feb 20 14:47:33.686078 master-0 kubenswrapper[7744]: I0220 14:47:33.685991 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg"
Feb 20 14:47:34.125015 master-0 kubenswrapper[7744]: I0220 14:47:34.124859 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj"]
Feb 20 14:47:34.135094 master-0 kubenswrapper[7744]: W0220 14:47:34.135003 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5aae2e2_7323_4927_a5ca_645e2a8b7bf9.slice/crio-d6248ee06a48f06e478ef5063a098128ee8d0223c87106accc15750d4a9a382d WatchSource:0}: Error finding container d6248ee06a48f06e478ef5063a098128ee8d0223c87106accc15750d4a9a382d: Status 404 returned error can't find the container with id d6248ee06a48f06e478ef5063a098128ee8d0223c87106accc15750d4a9a382d
Feb 20 14:47:34.191298 master-0 kubenswrapper[7744]: I0220 14:47:34.191136 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69686f5989-xkhpg"]
Feb 20 14:47:34.198493 master-0 kubenswrapper[7744]: W0220 14:47:34.198245 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26c30461_efe3_4999_9698_f3c478c71fa0.slice/crio-3a519667a7ca7f07e39cecb256979b5861f401752064e994ca5bfe33609b5b72 WatchSource:0}: Error finding container 3a519667a7ca7f07e39cecb256979b5861f401752064e994ca5bfe33609b5b72: Status 404 returned error can't find the container with id 3a519667a7ca7f07e39cecb256979b5861f401752064e994ca5bfe33609b5b72
Feb 20 14:47:34.542456 master-0 kubenswrapper[7744]: I0220 14:47:34.542278 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" event={"ID":"a8c0a6d2-f1f9-49e3-9475-4983b50667bf","Type":"ContainerStarted","Data":"f8a433dd00d15430b30f07f3b74ccacefa6f8385a2e11e771e5ee34057464565"}
Feb 20 14:47:34.544030 master-0 kubenswrapper[7744]: I0220 14:47:34.543989 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" event={"ID":"26c30461-efe3-4999-9698-f3c478c71fa0","Type":"ContainerStarted","Data":"d8497a234a46a68aecc54d147968178bed221b9af852b897918c6819201a92bf"}
Feb 20 14:47:34.544030 master-0 kubenswrapper[7744]: I0220 14:47:34.544028 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" event={"ID":"26c30461-efe3-4999-9698-f3c478c71fa0","Type":"ContainerStarted","Data":"3a519667a7ca7f07e39cecb256979b5861f401752064e994ca5bfe33609b5b72"}
Feb 20 14:47:34.544533 master-0 kubenswrapper[7744]: I0220 14:47:34.544473 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg"
Feb 20 14:47:34.547936 master-0 kubenswrapper[7744]: I0220 14:47:34.547849 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" event={"ID":"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9","Type":"ContainerStarted","Data":"229362d733da42839f2135af7287e0625de00b859da8c4afba1d54667a79cddd"}
Feb 20 14:47:34.548039 master-0 kubenswrapper[7744]: I0220 14:47:34.547943 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" event={"ID":"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9","Type":"ContainerStarted","Data":"d6248ee06a48f06e478ef5063a098128ee8d0223c87106accc15750d4a9a382d"}
Feb 20 14:47:34.548415 master-0 kubenswrapper[7744]: I0220 14:47:34.548362 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj"
Feb 20 14:47:34.554142 master-0 kubenswrapper[7744]: I0220 14:47:34.554071 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg"
Feb 20 14:47:34.558215 master-0 kubenswrapper[7744]: I0220 14:47:34.558129 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"986049a1-b3e4-4dca-b178-55eaa7a27bfb","Type":"ContainerStarted","Data":"6f844b10f8ac3c87a0a1682a1e7ea9ccbec49915b04b1fd7a88cca60f9004b80"}
Feb 20 14:47:34.558353 master-0 kubenswrapper[7744]: I0220 14:47:34.558239 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"986049a1-b3e4-4dca-b178-55eaa7a27bfb","Type":"ContainerStarted","Data":"c5695ade0d175e611702bd38877ae968a3b086637dc9039f70a7cafe4447aa4c"}
Feb 20 14:47:34.598104 master-0 kubenswrapper[7744]: I0220 14:47:34.598032 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" podStartSLOduration=6.598011522 podStartE2EDuration="6.598011522s" podCreationTimestamp="2026-02-20 14:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:34.596226188 +0000 UTC m=+53.798426118" watchObservedRunningTime="2026-02-20 14:47:34.598011522 +0000 UTC m=+53.800211452"
Feb 20 14:47:34.598899 master-0 kubenswrapper[7744]: I0220 14:47:34.598854 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" podStartSLOduration=6.003505357 podStartE2EDuration="9.598848262s" podCreationTimestamp="2026-02-20 14:47:25 +0000 UTC" firstStartedPulling="2026-02-20 14:47:28.977768399 +0000 UTC m=+48.179968319" lastFinishedPulling="2026-02-20 14:47:32.573111294 +0000 UTC m=+51.775311224" observedRunningTime="2026-02-20 14:47:34.580418593 +0000 UTC m=+53.782618523" watchObservedRunningTime="2026-02-20 14:47:34.598848262 +0000 UTC m=+53.801048202"
Feb 20 14:47:34.641353 master-0 kubenswrapper[7744]: I0220 14:47:34.639792 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" podStartSLOduration=7.639775629 podStartE2EDuration="7.639775629s" podCreationTimestamp="2026-02-20 14:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:34.620199932 +0000 UTC m=+53.822399852" watchObservedRunningTime="2026-02-20 14:47:34.639775629 +0000 UTC m=+53.841975549"
Feb 20 14:47:34.802477 master-0 kubenswrapper[7744]: I0220 14:47:34.802419 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj"
Feb 20 14:47:34.822755 master-0 kubenswrapper[7744]: I0220 14:47:34.822651 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=2.822622054 podStartE2EDuration="2.822622054s" podCreationTimestamp="2026-02-20 14:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:34.642227989 +0000 UTC m=+53.844427919" watchObservedRunningTime="2026-02-20 14:47:34.822622054 +0000 UTC m=+54.024822014"
Feb 20 14:47:36.890910 master-0 kubenswrapper[7744]: I0220 14:47:36.890832 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454"
Feb 20 14:47:36.890910 master-0 kubenswrapper[7744]: I0220 14:47:36.890890 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454"
Feb 20 14:47:36.898297 master-0 kubenswrapper[7744]: I0220 14:47:36.898259 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454"
Feb 20 14:47:37.302080 master-0 kubenswrapper[7744]: I0220 14:47:37.302031 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:37.307248 master-0 kubenswrapper[7744]: I0220 14:47:37.307125 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 14:47:37.314482 master-0 kubenswrapper[7744]: I0220 14:47:37.314424 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"]
Feb 20 14:47:37.314838 master-0 kubenswrapper[7744]: I0220 14:47:37.314656 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" podUID="4cede061-d85a-4366-9f1e-90be51f726fc" containerName="cluster-version-operator" containerID="cri-o://5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5" gracePeriod=130
Feb 20 14:47:37.578194 master-0 kubenswrapper[7744]: I0220 14:47:37.578105 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454"
Feb 20 14:47:37.968509 master-0 kubenswrapper[7744]: I0220 14:47:37.968459 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:47:38.128916 master-0 kubenswrapper[7744]: I0220 14:47:38.128778 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cede061-d85a-4366-9f1e-90be51f726fc-service-ca\") pod \"4cede061-d85a-4366-9f1e-90be51f726fc\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") "
Feb 20 14:47:38.128916 master-0 kubenswrapper[7744]: I0220 14:47:38.128865 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") pod \"4cede061-d85a-4366-9f1e-90be51f726fc\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") "
Feb 20 14:47:38.128916 master-0 kubenswrapper[7744]: I0220 14:47:38.128895 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cede061-d85a-4366-9f1e-90be51f726fc-kube-api-access\") pod \"4cede061-d85a-4366-9f1e-90be51f726fc\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") "
Feb 20 14:47:38.129183 master-0 kubenswrapper[7744]: I0220 14:47:38.128994 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-ssl-certs\") pod \"4cede061-d85a-4366-9f1e-90be51f726fc\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") "
Feb 20 14:47:38.129183 master-0 kubenswrapper[7744]: I0220 14:47:38.129020 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-cvo-updatepayloads\") pod \"4cede061-d85a-4366-9f1e-90be51f726fc\" (UID: \"4cede061-d85a-4366-9f1e-90be51f726fc\") "
Feb 20 14:47:38.129271 master-0 kubenswrapper[7744]: I0220 14:47:38.129177 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "4cede061-d85a-4366-9f1e-90be51f726fc" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:47:38.129314 master-0 kubenswrapper[7744]: I0220 14:47:38.129259 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "4cede061-d85a-4366-9f1e-90be51f726fc" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:47:38.129390 master-0 kubenswrapper[7744]: I0220 14:47:38.129348 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cede061-d85a-4366-9f1e-90be51f726fc-service-ca" (OuterVolumeSpecName: "service-ca") pod "4cede061-d85a-4366-9f1e-90be51f726fc" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:47:38.134038 master-0 kubenswrapper[7744]: I0220 14:47:38.133976 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cede061-d85a-4366-9f1e-90be51f726fc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4cede061-d85a-4366-9f1e-90be51f726fc" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:47:38.135354 master-0 kubenswrapper[7744]: I0220 14:47:38.135299 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4cede061-d85a-4366-9f1e-90be51f726fc" (UID: "4cede061-d85a-4366-9f1e-90be51f726fc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 14:47:38.230092 master-0 kubenswrapper[7744]: I0220 14:47:38.230025 7744 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cede061-d85a-4366-9f1e-90be51f726fc-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:38.230092 master-0 kubenswrapper[7744]: I0220 14:47:38.230075 7744 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cede061-d85a-4366-9f1e-90be51f726fc-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:38.230092 master-0 kubenswrapper[7744]: I0220 14:47:38.230099 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cede061-d85a-4366-9f1e-90be51f726fc-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:38.230364 master-0 kubenswrapper[7744]: I0220 14:47:38.230117 7744 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-ssl-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:38.230364 master-0 kubenswrapper[7744]: I0220 14:47:38.230139 7744 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4cede061-d85a-4366-9f1e-90be51f726fc-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:38.580725 master-0 kubenswrapper[7744]: I0220 14:47:38.580652 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_80490ae2-6185-4c98-ad70-bb13da2fe3b0/installer/0.log"
Feb 20 14:47:38.580891 master-0 kubenswrapper[7744]: I0220 14:47:38.580739 7744 generic.go:334] "Generic (PLEG): container finished" podID="80490ae2-6185-4c98-ad70-bb13da2fe3b0" containerID="063d7d38f9bc412babd73283f30cdd4274248e0467ee3c63ec3aa1207486311b" exitCode=1
Feb 20 14:47:38.581122 master-0 kubenswrapper[7744]: I0220 14:47:38.580836 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"80490ae2-6185-4c98-ad70-bb13da2fe3b0","Type":"ContainerDied","Data":"063d7d38f9bc412babd73283f30cdd4274248e0467ee3c63ec3aa1207486311b"}
Feb 20 14:47:38.583444 master-0 kubenswrapper[7744]: I0220 14:47:38.583391 7744 generic.go:334] "Generic (PLEG): container finished" podID="4cede061-d85a-4366-9f1e-90be51f726fc" containerID="5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5" exitCode=0
Feb 20 14:47:38.584371 master-0 kubenswrapper[7744]: I0220 14:47:38.584326 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"
Feb 20 14:47:38.587083 master-0 kubenswrapper[7744]: I0220 14:47:38.587039 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" event={"ID":"4cede061-d85a-4366-9f1e-90be51f726fc","Type":"ContainerDied","Data":"5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5"}
Feb 20 14:47:38.587193 master-0 kubenswrapper[7744]: I0220 14:47:38.587084 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9" event={"ID":"4cede061-d85a-4366-9f1e-90be51f726fc","Type":"ContainerDied","Data":"855c381c7e9503bada08349e9cd9fb33b869da71527d36e4ab32698f02bf192b"}
Feb 20 14:47:38.587193 master-0 kubenswrapper[7744]: I0220 14:47:38.587116 7744 scope.go:117] "RemoveContainer" containerID="5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5"
Feb 20 14:47:38.610312 master-0 kubenswrapper[7744]: I0220 14:47:38.610199 7744 scope.go:117] "RemoveContainer" containerID="5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5"
Feb 20 14:47:38.610776 master-0 kubenswrapper[7744]: E0220 14:47:38.610708 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5\": container with ID starting with 5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5 not found: ID does not exist" containerID="5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5"
Feb 20 14:47:38.610967 master-0 kubenswrapper[7744]: I0220 14:47:38.610773 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5"} err="failed to get container status \"5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5\": rpc error: code = NotFound desc = could not find container \"5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5\": container with ID starting with 5bfa599cbd27c25639ce1eac310bdf292fe58fdc151e3afbbf7cb5c9a001d3b5 not found: ID does not exist"
Feb 20 14:47:38.645493 master-0 kubenswrapper[7744]: I0220 14:47:38.644463 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"]
Feb 20 14:47:38.654746 master-0 kubenswrapper[7744]: I0220 14:47:38.654678 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-jf2s9"]
Feb 20 14:47:38.700604 master-0 kubenswrapper[7744]: I0220 14:47:38.700548 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-57476485-nl7tx"]
Feb 20 14:47:38.700964 master-0 kubenswrapper[7744]: E0220 14:47:38.700904 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cede061-d85a-4366-9f1e-90be51f726fc" containerName="cluster-version-operator"
Feb 20 14:47:38.701034 master-0 kubenswrapper[7744]: I0220 14:47:38.700974 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cede061-d85a-4366-9f1e-90be51f726fc" containerName="cluster-version-operator"
Feb 20 14:47:38.701172 master-0 kubenswrapper[7744]: I0220 14:47:38.701141 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cede061-d85a-4366-9f1e-90be51f726fc" containerName="cluster-version-operator"
Feb 20 14:47:38.702110 master-0 kubenswrapper[7744]: I0220 14:47:38.702077 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.706190 master-0 kubenswrapper[7744]: I0220 14:47:38.705838 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 20 14:47:38.706190 master-0 kubenswrapper[7744]: I0220 14:47:38.706114 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 20 14:47:38.706190 master-0 kubenswrapper[7744]: I0220 14:47:38.706140 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 20 14:47:38.706381 master-0 kubenswrapper[7744]: I0220 14:47:38.706155 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-5mrbx"
Feb 20 14:47:38.837860 master-0 kubenswrapper[7744]: I0220 14:47:38.837781 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26473c28-db42-47e6-9164-8c441ccc48ca-service-ca\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.837860 master-0 kubenswrapper[7744]: I0220 14:47:38.837842 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26473c28-db42-47e6-9164-8c441ccc48ca-serving-cert\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.838095 master-0 kubenswrapper[7744]: I0220 14:47:38.837876 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/26473c28-db42-47e6-9164-8c441ccc48ca-etc-ssl-certs\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.838095 master-0 kubenswrapper[7744]: I0220 14:47:38.837913 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/26473c28-db42-47e6-9164-8c441ccc48ca-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.838095 master-0 kubenswrapper[7744]: I0220 14:47:38.837993 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26473c28-db42-47e6-9164-8c441ccc48ca-kube-api-access\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.878035 master-0 kubenswrapper[7744]: I0220 14:47:38.877992 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_80490ae2-6185-4c98-ad70-bb13da2fe3b0/installer/0.log"
Feb 20 14:47:38.878137 master-0 kubenswrapper[7744]: I0220 14:47:38.878072 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 20 14:47:38.940000 master-0 kubenswrapper[7744]: I0220 14:47:38.939043 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26473c28-db42-47e6-9164-8c441ccc48ca-serving-cert\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.940000 master-0 kubenswrapper[7744]: I0220 14:47:38.939101 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/26473c28-db42-47e6-9164-8c441ccc48ca-etc-ssl-certs\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.940000 master-0 kubenswrapper[7744]: I0220 14:47:38.939134 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/26473c28-db42-47e6-9164-8c441ccc48ca-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.940000 master-0 kubenswrapper[7744]: I0220 14:47:38.939230 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/26473c28-db42-47e6-9164-8c441ccc48ca-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.940000 master-0 kubenswrapper[7744]: I0220 14:47:38.939663 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26473c28-db42-47e6-9164-8c441ccc48ca-kube-api-access\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.940000 master-0 kubenswrapper[7744]: I0220 14:47:38.939721 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26473c28-db42-47e6-9164-8c441ccc48ca-service-ca\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.940452 master-0 kubenswrapper[7744]: I0220 14:47:38.940090 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/26473c28-db42-47e6-9164-8c441ccc48ca-etc-ssl-certs\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.948051 master-0 kubenswrapper[7744]: I0220 14:47:38.944849 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26473c28-db42-47e6-9164-8c441ccc48ca-service-ca\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.949742 master-0 kubenswrapper[7744]: I0220 14:47:38.949699 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26473c28-db42-47e6-9164-8c441ccc48ca-serving-cert\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:38.963131 master-0 kubenswrapper[7744]: I0220 14:47:38.963033 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26473c28-db42-47e6-9164-8c441ccc48ca-kube-api-access\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:39.022704 master-0 kubenswrapper[7744]: I0220 14:47:39.022651 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx"
Feb 20 14:47:39.043379 master-0 kubenswrapper[7744]: I0220 14:47:39.042722 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80490ae2-6185-4c98-ad70-bb13da2fe3b0-kube-api-access\") pod \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\" (UID: \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\") "
Feb 20 14:47:39.043379 master-0 kubenswrapper[7744]: I0220 14:47:39.042780 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80490ae2-6185-4c98-ad70-bb13da2fe3b0-var-lock\") pod \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\" (UID: \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\") "
Feb 20 14:47:39.043379 master-0 kubenswrapper[7744]: I0220 14:47:39.042808 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80490ae2-6185-4c98-ad70-bb13da2fe3b0-kubelet-dir\") pod \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\" (UID: \"80490ae2-6185-4c98-ad70-bb13da2fe3b0\") "
Feb 20 14:47:39.043379 master-0 kubenswrapper[7744]: I0220 14:47:39.043074 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80490ae2-6185-4c98-ad70-bb13da2fe3b0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "80490ae2-6185-4c98-ad70-bb13da2fe3b0" (UID: "80490ae2-6185-4c98-ad70-bb13da2fe3b0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:47:39.043799 master-0 kubenswrapper[7744]: I0220 14:47:39.043533 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80490ae2-6185-4c98-ad70-bb13da2fe3b0-var-lock" (OuterVolumeSpecName: "var-lock") pod "80490ae2-6185-4c98-ad70-bb13da2fe3b0" (UID: "80490ae2-6185-4c98-ad70-bb13da2fe3b0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:47:39.047981 master-0 kubenswrapper[7744]: I0220 14:47:39.047945 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80490ae2-6185-4c98-ad70-bb13da2fe3b0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "80490ae2-6185-4c98-ad70-bb13da2fe3b0" (UID: "80490ae2-6185-4c98-ad70-bb13da2fe3b0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:47:39.059986 master-0 kubenswrapper[7744]: I0220 14:47:39.059036 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cede061-d85a-4366-9f1e-90be51f726fc" path="/var/lib/kubelet/pods/4cede061-d85a-4366-9f1e-90be51f726fc/volumes"
Feb 20 14:47:39.144652 master-0 kubenswrapper[7744]: I0220 14:47:39.144585 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80490ae2-6185-4c98-ad70-bb13da2fe3b0-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:39.144756 master-0 kubenswrapper[7744]: I0220 14:47:39.144655 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80490ae2-6185-4c98-ad70-bb13da2fe3b0-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:39.144756 master-0 kubenswrapper[7744]: I0220 14:47:39.144677 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80490ae2-6185-4c98-ad70-bb13da2fe3b0-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 14:47:39.604510 master-0 kubenswrapper[7744]: I0220 14:47:39.602969 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_80490ae2-6185-4c98-ad70-bb13da2fe3b0/installer/0.log"
Feb 20 14:47:39.604510 master-0 kubenswrapper[7744]: I0220 14:47:39.603100 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"80490ae2-6185-4c98-ad70-bb13da2fe3b0","Type":"ContainerDied","Data":"0098096c222a7f7bbec901788c207239fe95e271e299dfb4562ca29e40273cc7"}
Feb 20 14:47:39.604510 master-0 kubenswrapper[7744]: I0220 14:47:39.603162 7744 scope.go:117] "RemoveContainer" containerID="063d7d38f9bc412babd73283f30cdd4274248e0467ee3c63ec3aa1207486311b"
Feb 20 14:47:39.604510 master-0 kubenswrapper[7744]: I0220 14:47:39.603350 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 20 14:47:39.610216 master-0 kubenswrapper[7744]: I0220 14:47:39.610163 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" event={"ID":"26473c28-db42-47e6-9164-8c441ccc48ca","Type":"ContainerStarted","Data":"cf505a16e5ad42c8152bcd72aafcb820926098f85d46295fdbd79e955e20ab07"}
Feb 20 14:47:39.610389 master-0 kubenswrapper[7744]: I0220 14:47:39.610367 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" event={"ID":"26473c28-db42-47e6-9164-8c441ccc48ca","Type":"ContainerStarted","Data":"4d7a859ad253e344142e3d8002817623ee421d3b324eff2b6246c1b1fdd11bc1"}
Feb 20 14:47:39.626375 master-0 kubenswrapper[7744]: I0220 14:47:39.625722 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 20 14:47:39.642588 master-0 kubenswrapper[7744]: I0220 14:47:39.642545 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 20 14:47:39.662208 master-0 kubenswrapper[7744]: I0220 14:47:39.661479 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" podStartSLOduration=1.661459941 podStartE2EDuration="1.661459941s" podCreationTimestamp="2026-02-20 14:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:39.657983797 +0000 UTC m=+58.860183747" watchObservedRunningTime="2026-02-20 14:47:39.661459941 +0000 UTC m=+58.863659871"
Feb 20 14:47:41.050179 master-0 kubenswrapper[7744]: I0220 14:47:41.050116 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80490ae2-6185-4c98-ad70-bb13da2fe3b0" path="/var/lib/kubelet/pods/80490ae2-6185-4c98-ad70-bb13da2fe3b0/volumes"
Feb 20 14:47:41.566446 master-0 kubenswrapper[7744]: I0220 14:47:41.566302 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 20 14:47:41.566748 master-0 kubenswrapper[7744]: E0220 14:47:41.566606 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80490ae2-6185-4c98-ad70-bb13da2fe3b0" containerName="installer"
Feb 20 14:47:41.566748 master-0 kubenswrapper[7744]: I0220 14:47:41.566627 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="80490ae2-6185-4c98-ad70-bb13da2fe3b0" containerName="installer"
Feb 20 14:47:41.566854 master-0 kubenswrapper[7744]: I0220 14:47:41.566750 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="80490ae2-6185-4c98-ad70-bb13da2fe3b0" containerName="installer"
Feb 20 14:47:41.567827 master-0 kubenswrapper[7744]: I0220 14:47:41.567447 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 14:47:41.570863 master-0 kubenswrapper[7744]: I0220 14:47:41.570732 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 20 14:47:41.570863 master-0 kubenswrapper[7744]: I0220 14:47:41.570756 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-j2nvb"
Feb 20 14:47:41.581779 master-0 kubenswrapper[7744]: I0220 14:47:41.579626 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 20 14:47:41.675819 master-0 kubenswrapper[7744]: I0220 14:47:41.675742 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/277ab008-e6f0-49cd-801d-54d3071036d4-var-lock\") pod \"installer-1-master-0\" (UID: \"277ab008-e6f0-49cd-801d-54d3071036d4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 14:47:41.676120 master-0 kubenswrapper[7744]: I0220 14:47:41.675867 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/277ab008-e6f0-49cd-801d-54d3071036d4-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"277ab008-e6f0-49cd-801d-54d3071036d4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 14:47:41.676120 master-0 kubenswrapper[7744]: I0220 14:47:41.675934 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/277ab008-e6f0-49cd-801d-54d3071036d4-kube-api-access\") pod \"installer-1-master-0\" (UID: \"277ab008-e6f0-49cd-801d-54d3071036d4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 14:47:41.776784 master-0 kubenswrapper[7744]: I0220 14:47:41.776722 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/277ab008-e6f0-49cd-801d-54d3071036d4-kube-api-access\") pod \"installer-1-master-0\" (UID: \"277ab008-e6f0-49cd-801d-54d3071036d4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 14:47:41.777073 master-0 kubenswrapper[7744]: I0220 14:47:41.776852 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/277ab008-e6f0-49cd-801d-54d3071036d4-var-lock\") pod \"installer-1-master-0\" (UID: \"277ab008-e6f0-49cd-801d-54d3071036d4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 14:47:41.777073 master-0 kubenswrapper[7744]: I0220 14:47:41.777022 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/277ab008-e6f0-49cd-801d-54d3071036d4-var-lock\") pod \"installer-1-master-0\" (UID: \"277ab008-e6f0-49cd-801d-54d3071036d4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 14:47:41.777223 master-0 kubenswrapper[7744]: I0220 14:47:41.777110 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/277ab008-e6f0-49cd-801d-54d3071036d4-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"277ab008-e6f0-49cd-801d-54d3071036d4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 14:47:41.777292 master-0 kubenswrapper[7744]: I0220 14:47:41.777243 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/277ab008-e6f0-49cd-801d-54d3071036d4-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"277ab008-e6f0-49cd-801d-54d3071036d4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 14:47:41.809659 master-0 kubenswrapper[7744]: I0220 14:47:41.809554 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/277ab008-e6f0-49cd-801d-54d3071036d4-kube-api-access\") pod \"installer-1-master-0\" (UID: \"277ab008-e6f0-49cd-801d-54d3071036d4\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 14:47:41.915041 master-0 kubenswrapper[7744]: I0220 14:47:41.914831 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 14:47:42.274548 master-0 kubenswrapper[7744]: I0220 14:47:42.274462 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-dzfl8"
Feb 20 14:47:42.417724 master-0 kubenswrapper[7744]: I0220 14:47:42.417676 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 20 14:47:42.422485 master-0 kubenswrapper[7744]: W0220 14:47:42.422201 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod277ab008_e6f0_49cd_801d_54d3071036d4.slice/crio-9973189e4a2bcf54eee01772766f154e4f0414d83cd39d056cbee6f94ee506af WatchSource:0}: Error finding container 9973189e4a2bcf54eee01772766f154e4f0414d83cd39d056cbee6f94ee506af: Status 404 returned error can't find the container with id 9973189e4a2bcf54eee01772766f154e4f0414d83cd39d056cbee6f94ee506af
Feb 20 14:47:42.639216 master-0 kubenswrapper[7744]: I0220 14:47:42.639162 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"277ab008-e6f0-49cd-801d-54d3071036d4","Type":"ContainerStarted","Data":"9973189e4a2bcf54eee01772766f154e4f0414d83cd39d056cbee6f94ee506af"}
Feb 20 14:47:43.649041 master-0 kubenswrapper[7744]: I0220 14:47:43.648957 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"277ab008-e6f0-49cd-801d-54d3071036d4","Type":"ContainerStarted","Data":"6c7c12ccf7f07aacf9744ba31c10a72a4c19226b35c8d4fd36f32979a50dbaaf"} Feb 20 14:47:43.675602 master-0 kubenswrapper[7744]: I0220 14:47:43.675480 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=2.675453585 podStartE2EDuration="2.675453585s" podCreationTimestamp="2026-02-20 14:47:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:47:43.673004163 +0000 UTC m=+62.875204123" watchObservedRunningTime="2026-02-20 14:47:43.675453585 +0000 UTC m=+62.877653545" Feb 20 14:47:44.347273 master-0 kubenswrapper[7744]: I0220 14:47:44.347201 7744 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 20 14:47:44.347541 master-0 kubenswrapper[7744]: I0220 14:47:44.347468 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" containerID="cri-o://10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc" gracePeriod=30 Feb 20 14:47:44.347541 master-0 kubenswrapper[7744]: I0220 14:47:44.347474 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" containerID="cri-o://ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10" gracePeriod=30 Feb 20 14:47:44.504682 master-0 kubenswrapper[7744]: I0220 14:47:44.432183 7744 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 20 14:47:44.504682 master-0 kubenswrapper[7744]: E0220 14:47:44.432436 7744 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 20 14:47:44.504682 master-0 kubenswrapper[7744]: I0220 14:47:44.432457 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 20 14:47:44.504682 master-0 kubenswrapper[7744]: E0220 14:47:44.432480 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 20 14:47:44.504682 master-0 kubenswrapper[7744]: I0220 14:47:44.432495 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 20 14:47:44.504682 master-0 kubenswrapper[7744]: I0220 14:47:44.432633 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" Feb 20 14:47:44.504682 master-0 kubenswrapper[7744]: I0220 14:47:44.432655 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" Feb 20 14:47:44.504682 master-0 kubenswrapper[7744]: I0220 14:47:44.434833 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.517136 master-0 kubenswrapper[7744]: I0220 14:47:44.517091 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.517212 master-0 kubenswrapper[7744]: I0220 14:47:44.517141 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.517212 master-0 kubenswrapper[7744]: I0220 14:47:44.517180 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.517285 master-0 kubenswrapper[7744]: I0220 14:47:44.517217 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.517314 master-0 kubenswrapper[7744]: I0220 14:47:44.517270 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.517348 master-0 
kubenswrapper[7744]: I0220 14:47:44.517333 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.619256 master-0 kubenswrapper[7744]: I0220 14:47:44.619026 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.619256 master-0 kubenswrapper[7744]: I0220 14:47:44.619217 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.619453 master-0 kubenswrapper[7744]: I0220 14:47:44.619289 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.619453 master-0 kubenswrapper[7744]: I0220 14:47:44.619333 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.619453 master-0 kubenswrapper[7744]: I0220 14:47:44.619336 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.619453 master-0 kubenswrapper[7744]: I0220 14:47:44.619403 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.619570 master-0 kubenswrapper[7744]: I0220 14:47:44.619468 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.619604 master-0 kubenswrapper[7744]: I0220 14:47:44.619559 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.619604 master-0 kubenswrapper[7744]: I0220 14:47:44.619592 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.619670 master-0 kubenswrapper[7744]: I0220 14:47:44.619641 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.619705 master-0 kubenswrapper[7744]: I0220 
14:47:44.619669 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:44.619804 master-0 kubenswrapper[7744]: I0220 14:47:44.619770 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0" Feb 20 14:47:45.835717 master-0 kubenswrapper[7744]: I0220 14:47:45.835642 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:47:45.840987 master-0 kubenswrapper[7744]: I0220 14:47:45.840916 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:47:45.882098 master-0 kubenswrapper[7744]: I0220 14:47:45.882021 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:47:57.486949 master-0 kubenswrapper[7744]: E0220 14:47:57.486878 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 20 14:47:57.487769 master-0 kubenswrapper[7744]: I0220 14:47:57.487726 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 20 14:47:57.511805 master-0 kubenswrapper[7744]: W0220 14:47:57.511726 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a83278819db2092fa26d8274eb3f00.slice/crio-169301f231999751626367573bc5e36ff31a1905919a09ffd1d8ae262296aa73 WatchSource:0}: Error finding container 169301f231999751626367573bc5e36ff31a1905919a09ffd1d8ae262296aa73: Status 404 returned error can't find the container with id 169301f231999751626367573bc5e36ff31a1905919a09ffd1d8ae262296aa73 Feb 20 14:47:57.721422 master-0 kubenswrapper[7744]: I0220 14:47:57.721354 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-j66jm_45d7ef0c-272b-4d1e-965f-484975d5d25c/openshift-controller-manager-operator/0.log" Feb 20 14:47:57.721422 master-0 kubenswrapper[7744]: I0220 14:47:57.721401 7744 generic.go:334] "Generic (PLEG): container finished" podID="45d7ef0c-272b-4d1e-965f-484975d5d25c" containerID="233f31cc87ed77a81bb475184c8275cb1327d0aaed87c186b3895bc1d70da1c4" exitCode=1 Feb 20 14:47:57.721659 master-0 kubenswrapper[7744]: I0220 14:47:57.721453 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" 
event={"ID":"45d7ef0c-272b-4d1e-965f-484975d5d25c","Type":"ContainerDied","Data":"233f31cc87ed77a81bb475184c8275cb1327d0aaed87c186b3895bc1d70da1c4"} Feb 20 14:47:57.721842 master-0 kubenswrapper[7744]: I0220 14:47:57.721803 7744 scope.go:117] "RemoveContainer" containerID="233f31cc87ed77a81bb475184c8275cb1327d0aaed87c186b3895bc1d70da1c4" Feb 20 14:47:57.723237 master-0 kubenswrapper[7744]: I0220 14:47:57.723197 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"169301f231999751626367573bc5e36ff31a1905919a09ffd1d8ae262296aa73"} Feb 20 14:47:58.733513 master-0 kubenswrapper[7744]: I0220 14:47:58.733297 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-j66jm_45d7ef0c-272b-4d1e-965f-484975d5d25c/openshift-controller-manager-operator/0.log" Feb 20 14:47:58.734373 master-0 kubenswrapper[7744]: I0220 14:47:58.733598 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" event={"ID":"45d7ef0c-272b-4d1e-965f-484975d5d25c","Type":"ContainerStarted","Data":"2b921a59215a9b57fc0e140139af8ee009d893b2733cf5fcafdbd68899442899"} Feb 20 14:47:58.737349 master-0 kubenswrapper[7744]: I0220 14:47:58.737289 7744 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="0790f8358bd195055e3ef6e8082a46ba0465ead8edb0163b9e940b4b722d77b1" exitCode=0 Feb 20 14:47:58.737526 master-0 kubenswrapper[7744]: I0220 14:47:58.737372 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"0790f8358bd195055e3ef6e8082a46ba0465ead8edb0163b9e940b4b722d77b1"} Feb 20 14:47:58.739521 master-0 kubenswrapper[7744]: I0220 14:47:58.739439 
7744 generic.go:334] "Generic (PLEG): container finished" podID="53835140-8eed-401c-ac07-f89b554ff616" containerID="ac1ebe21f01db828cbdc3775b7cb4f962d321758483e5f64757855bd43976682" exitCode=0 Feb 20 14:47:58.739734 master-0 kubenswrapper[7744]: I0220 14:47:58.739499 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"53835140-8eed-401c-ac07-f89b554ff616","Type":"ContainerDied","Data":"ac1ebe21f01db828cbdc3775b7cb4f962d321758483e5f64757855bd43976682"} Feb 20 14:47:59.748187 master-0 kubenswrapper[7744]: I0220 14:47:59.748137 7744 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="24b2aee1f972d18ca4405ff399927f57d407665113e657b4f3db6303afde8747" exitCode=1 Feb 20 14:47:59.749035 master-0 kubenswrapper[7744]: I0220 14:47:59.748287 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"24b2aee1f972d18ca4405ff399927f57d407665113e657b4f3db6303afde8747"} Feb 20 14:47:59.749141 master-0 kubenswrapper[7744]: I0220 14:47:59.749086 7744 scope.go:117] "RemoveContainer" containerID="6dbf7c55ace0ed513f2aaaeda5aa48d72fc75a02defc6cc2063a7bcf59d1c27f" Feb 20 14:47:59.749896 master-0 kubenswrapper[7744]: I0220 14:47:59.749855 7744 scope.go:117] "RemoveContainer" containerID="24b2aee1f972d18ca4405ff399927f57d407665113e657b4f3db6303afde8747" Feb 20 14:48:00.105101 master-0 kubenswrapper[7744]: I0220 14:48:00.105035 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:48:00.168244 master-0 kubenswrapper[7744]: I0220 14:48:00.168192 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 20 14:48:00.309890 master-0 kubenswrapper[7744]: I0220 14:48:00.309828 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53835140-8eed-401c-ac07-f89b554ff616-var-lock\") pod \"53835140-8eed-401c-ac07-f89b554ff616\" (UID: \"53835140-8eed-401c-ac07-f89b554ff616\") " Feb 20 14:48:00.310156 master-0 kubenswrapper[7744]: I0220 14:48:00.309968 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53835140-8eed-401c-ac07-f89b554ff616-kubelet-dir\") pod \"53835140-8eed-401c-ac07-f89b554ff616\" (UID: \"53835140-8eed-401c-ac07-f89b554ff616\") " Feb 20 14:48:00.310156 master-0 kubenswrapper[7744]: I0220 14:48:00.309980 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53835140-8eed-401c-ac07-f89b554ff616-var-lock" (OuterVolumeSpecName: "var-lock") pod "53835140-8eed-401c-ac07-f89b554ff616" (UID: "53835140-8eed-401c-ac07-f89b554ff616"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:48:00.310156 master-0 kubenswrapper[7744]: I0220 14:48:00.310031 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53835140-8eed-401c-ac07-f89b554ff616-kube-api-access\") pod \"53835140-8eed-401c-ac07-f89b554ff616\" (UID: \"53835140-8eed-401c-ac07-f89b554ff616\") " Feb 20 14:48:00.310156 master-0 kubenswrapper[7744]: I0220 14:48:00.310041 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53835140-8eed-401c-ac07-f89b554ff616-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "53835140-8eed-401c-ac07-f89b554ff616" (UID: "53835140-8eed-401c-ac07-f89b554ff616"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:48:00.310407 master-0 kubenswrapper[7744]: I0220 14:48:00.310286 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53835140-8eed-401c-ac07-f89b554ff616-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:48:00.310407 master-0 kubenswrapper[7744]: I0220 14:48:00.310308 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53835140-8eed-401c-ac07-f89b554ff616-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 14:48:00.314778 master-0 kubenswrapper[7744]: I0220 14:48:00.314666 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53835140-8eed-401c-ac07-f89b554ff616-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "53835140-8eed-401c-ac07-f89b554ff616" (UID: "53835140-8eed-401c-ac07-f89b554ff616"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:48:00.357838 master-0 kubenswrapper[7744]: I0220 14:48:00.357781 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:48:00.411389 master-0 kubenswrapper[7744]: I0220 14:48:00.411328 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53835140-8eed-401c-ac07-f89b554ff616-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 14:48:00.756498 master-0 kubenswrapper[7744]: I0220 14:48:00.756425 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 20 14:48:00.757371 master-0 kubenswrapper[7744]: I0220 14:48:00.756419 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"53835140-8eed-401c-ac07-f89b554ff616","Type":"ContainerDied","Data":"823465cca5c74108f34569b06808ad03bfdc5a9d5fe983b835a9ba1e796ceb31"} Feb 20 14:48:00.757371 master-0 kubenswrapper[7744]: I0220 14:48:00.756680 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="823465cca5c74108f34569b06808ad03bfdc5a9d5fe983b835a9ba1e796ceb31" Feb 20 14:48:00.761510 master-0 kubenswrapper[7744]: I0220 14:48:00.760682 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"270d3a75efe91ed6ef6d1abeb18e00097f8477e6f1fadd3a750363afe0b16909"} Feb 20 14:48:01.770032 master-0 kubenswrapper[7744]: I0220 14:48:01.769884 7744 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="1dbd1253fb8b09bfbaa096d3703dce0afe66c7bc42222d1d422586b85221b083" exitCode=1 Feb 20 14:48:01.770032 master-0 kubenswrapper[7744]: I0220 14:48:01.769952 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"1dbd1253fb8b09bfbaa096d3703dce0afe66c7bc42222d1d422586b85221b083"} Feb 20 14:48:01.770955 master-0 kubenswrapper[7744]: I0220 14:48:01.770873 7744 scope.go:117] "RemoveContainer" containerID="1dbd1253fb8b09bfbaa096d3703dce0afe66c7bc42222d1d422586b85221b083" Feb 20 14:48:02.555741 master-0 kubenswrapper[7744]: E0220 14:48:02.555647 7744 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 14:48:02.777544 master-0 kubenswrapper[7744]: I0220 14:48:02.777456 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"e95606b40a17608c8c7fdabfbaff98a784411ba115dbcdf26ab46d49f3aaafbd"} Feb 20 14:48:03.427949 master-0 kubenswrapper[7744]: E0220 14:48:03.427666 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:47:53Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:47:53Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:47:53Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:47:53Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58
dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\
\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4
d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568\\\"],\\\"sizeBytes\\\":411485245},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0\\\"],\\\"sizeBytes\\\":407241636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229\\\"],\\\"sizeBytes\\\":396420881}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:48:10.105375 master-0 kubenswrapper[7744]: I0220 14:48:10.105268 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:48:10.357475 master-0 kubenswrapper[7744]: I0220 14:48:10.357288 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:48:11.747430 master-0 kubenswrapper[7744]: E0220 14:48:11.747374 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 20 14:48:11.832479 master-0 kubenswrapper[7744]: I0220 14:48:11.832393 7744 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10" exitCode=0
Feb 20 14:48:12.556325 master-0 kubenswrapper[7744]: E0220 14:48:12.556227 7744 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:48:12.841213 master-0 kubenswrapper[7744]: I0220 14:48:12.841060 7744 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="667e04d2ee9447d5c6be6502611e06945f27b2a635f79b208a58d8042b30dc6b" exitCode=0
Feb 20 14:48:12.841213 master-0 kubenswrapper[7744]: I0220 14:48:12.841129 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"667e04d2ee9447d5c6be6502611e06945f27b2a635f79b208a58d8042b30dc6b"}
Feb 20 14:48:13.106222 master-0 kubenswrapper[7744]: I0220 14:48:13.106013 7744 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:48:13.429373 master-0 kubenswrapper[7744]: E0220 14:48:13.429189 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:48:14.482648 master-0 kubenswrapper[7744]: I0220 14:48:14.482586 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log"
Feb 20 14:48:14.483316 master-0 kubenswrapper[7744]: I0220 14:48:14.482703 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:48:14.583117 master-0 kubenswrapper[7744]: I0220 14:48:14.583020 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") "
Feb 20 14:48:14.583117 master-0 kubenswrapper[7744]: I0220 14:48:14.583118 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") "
Feb 20 14:48:14.583431 master-0 kubenswrapper[7744]: I0220 14:48:14.583185 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs" (OuterVolumeSpecName: "certs") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:48:14.583431 master-0 kubenswrapper[7744]: I0220 14:48:14.583331 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir" (OuterVolumeSpecName: "data-dir") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:48:14.583974 master-0 kubenswrapper[7744]: I0220 14:48:14.583888 7744 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 14:48:14.583974 master-0 kubenswrapper[7744]: I0220 14:48:14.583926 7744 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 14:48:14.856190 master-0 kubenswrapper[7744]: I0220 14:48:14.856127 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log"
Feb 20 14:48:14.856492 master-0 kubenswrapper[7744]: I0220 14:48:14.856204 7744 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc" exitCode=137
Feb 20 14:48:14.856492 master-0 kubenswrapper[7744]: I0220 14:48:14.856273 7744 scope.go:117] "RemoveContainer" containerID="ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10"
Feb 20 14:48:14.856492 master-0 kubenswrapper[7744]: I0220 14:48:14.856327 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:48:14.878914 master-0 kubenswrapper[7744]: I0220 14:48:14.878863 7744 scope.go:117] "RemoveContainer" containerID="10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc"
Feb 20 14:48:14.902711 master-0 kubenswrapper[7744]: I0220 14:48:14.902639 7744 scope.go:117] "RemoveContainer" containerID="ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10"
Feb 20 14:48:14.903478 master-0 kubenswrapper[7744]: E0220 14:48:14.903432 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10\": container with ID starting with ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10 not found: ID does not exist" containerID="ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10"
Feb 20 14:48:14.903644 master-0 kubenswrapper[7744]: I0220 14:48:14.903491 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10"} err="failed to get container status \"ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10\": rpc error: code = NotFound desc = could not find container \"ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10\": container with ID starting with ab258fec42d8ec54f4f2b16e7f18ce6e3f88de1f121875064baf67bce8e05a10 not found: ID does not exist"
Feb 20 14:48:14.903644 master-0 kubenswrapper[7744]: I0220 14:48:14.903529 7744 scope.go:117] "RemoveContainer" containerID="10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc"
Feb 20 14:48:14.904341 master-0 kubenswrapper[7744]: E0220 14:48:14.904269 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc\": container with ID starting with 10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc not found: ID does not exist" containerID="10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc"
Feb 20 14:48:14.904475 master-0 kubenswrapper[7744]: I0220 14:48:14.904339 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc"} err="failed to get container status \"10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc\": rpc error: code = NotFound desc = could not find container \"10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc\": container with ID starting with 10bfd96b29aba7539a53e7ab2b44c245c4854718cd635aecd100e792a48f1fdc not found: ID does not exist"
Feb 20 14:48:15.047151 master-0 kubenswrapper[7744]: I0220 14:48:15.047060 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12dab5d350ebc129b0bfa4714d330b15" path="/var/lib/kubelet/pods/12dab5d350ebc129b0bfa4714d330b15/volumes"
Feb 20 14:48:15.047504 master-0 kubenswrapper[7744]: I0220 14:48:15.047463 7744 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Feb 20 14:48:15.185473 master-0 kubenswrapper[7744]: I0220 14:48:15.185284 7744 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-jhd5c container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Feb 20 14:48:15.185473 master-0 kubenswrapper[7744]: I0220 14:48:15.185359 7744 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" podUID="234a44fd-c153-47a6-a11d-7d4b7165c236" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Feb 20 14:48:18.357533 master-0 kubenswrapper[7744]: E0220 14:48:18.357119 7744 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.1895fbc7e1ea73a8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:47:44.34746052 +0000 UTC m=+63.549660450,LastTimestamp:2026-02-20 14:47:44.34746052 +0000 UTC m=+63.549660450,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:48:18.882132 master-0 kubenswrapper[7744]: I0220 14:48:18.882047 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_986049a1-b3e4-4dca-b178-55eaa7a27bfb/installer/0.log"
Feb 20 14:48:18.882132 master-0 kubenswrapper[7744]: I0220 14:48:18.882113 7744 generic.go:334] "Generic (PLEG): container finished" podID="986049a1-b3e4-4dca-b178-55eaa7a27bfb" containerID="6f844b10f8ac3c87a0a1682a1e7ea9ccbec49915b04b1fd7a88cca60f9004b80" exitCode=1
Feb 20 14:48:22.557305 master-0 kubenswrapper[7744]: E0220 14:48:22.557212 7744 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:48:23.106379 master-0 kubenswrapper[7744]: I0220 14:48:23.106185 7744 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:48:23.430325 master-0 kubenswrapper[7744]: E0220 14:48:23.430151 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:48:25.851610 master-0 kubenswrapper[7744]: E0220 14:48:25.851505 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 20 14:48:26.936271 master-0 kubenswrapper[7744]: I0220 14:48:26.936096 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_975d0fde-cb2f-4599-b3b7-7de876307a61/installer/0.log"
Feb 20 14:48:26.936271 master-0 kubenswrapper[7744]: I0220 14:48:26.936168 7744 generic.go:334] "Generic (PLEG): container finished" podID="975d0fde-cb2f-4599-b3b7-7de876307a61" containerID="a59f2b3ca51cdc733c2fc543e6bd0ce183b3347c73680778d3a84d4f88dd4a1f" exitCode=1
Feb 20 14:48:26.939896 master-0 kubenswrapper[7744]: I0220 14:48:26.939814 7744 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="afdfde0efc416d5e6424b7e7305c6f92f436f753f3f94c9b4efe806e43f618f1" exitCode=0
Feb 20 14:48:27.948451 master-0 kubenswrapper[7744]: I0220 14:48:27.948257 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_277ab008-e6f0-49cd-801d-54d3071036d4/installer/0.log"
Feb 20 14:48:27.948451 master-0 kubenswrapper[7744]: I0220 14:48:27.948323 7744 generic.go:334] "Generic (PLEG): container finished" podID="277ab008-e6f0-49cd-801d-54d3071036d4" containerID="6c7c12ccf7f07aacf9744ba31c10a72a4c19226b35c8d4fd36f32979a50dbaaf" exitCode=1
Feb 20 14:48:29.963650 master-0 kubenswrapper[7744]: I0220 14:48:29.963517 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-tj8fx_9fd9f419-2cdc-4991-8fb9-87d76ac58976/network-operator/0.log"
Feb 20 14:48:29.963650 master-0 kubenswrapper[7744]: I0220 14:48:29.963594 7744 generic.go:334] "Generic (PLEG): container finished" podID="9fd9f419-2cdc-4991-8fb9-87d76ac58976" containerID="206ff74dbf8ac205b7526aba69f67598c7eb64c83ff678f0e12a41fa367def5c" exitCode=255
Feb 20 14:48:31.976082 master-0 kubenswrapper[7744]: I0220 14:48:31.975971 7744 generic.go:334] "Generic (PLEG): container finished" podID="234a44fd-c153-47a6-a11d-7d4b7165c236" containerID="62a31d32d4ca4d676ab042ba4779a3437daeccc9e4cd7a7e48c41884a5b21dfe" exitCode=0
Feb 20 14:48:32.558052 master-0 kubenswrapper[7744]: E0220 14:48:32.557969 7744 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:48:33.106028 master-0 kubenswrapper[7744]: I0220 14:48:33.105943 7744 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:48:33.431300 master-0 kubenswrapper[7744]: E0220 14:48:33.431085 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:48:37.006346 master-0 kubenswrapper[7744]: I0220 14:48:37.006270 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-gprr4_33675e96-ce49-49be-9117-954ac7cca5d5/approver/0.log"
Feb 20 14:48:37.007298 master-0 kubenswrapper[7744]: I0220 14:48:37.007066 7744 generic.go:334] "Generic (PLEG): container finished" podID="33675e96-ce49-49be-9117-954ac7cca5d5" containerID="4e27eb5860cdd7ddac83a0d0bd7cc2ce5f678c93e28b4ef780b63b34098f4c71" exitCode=1
Feb 20 14:48:38.015360 master-0 kubenswrapper[7744]: I0220 14:48:38.015219 7744 generic.go:334] "Generic (PLEG): container finished" podID="43e9807a-859c-44c1-8511-0066b0f59ff8" containerID="434ed936cc25c1d0e0f36dd52a8572c7b7417d14a5a50821cdca25739e6e9d2b" exitCode=0
Feb 20 14:48:39.946900 master-0 kubenswrapper[7744]: E0220 14:48:39.946822 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 20 14:48:42.558287 master-0 kubenswrapper[7744]: E0220 14:48:42.558226 7744 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded"
Feb 20 14:48:42.558287 master-0 kubenswrapper[7744]: I0220 14:48:42.558277 7744 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 20 14:48:43.057286 master-0 kubenswrapper[7744]: I0220 14:48:43.057191 7744 generic.go:334] "Generic (PLEG): container finished" podID="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" containerID="e7d3fca444d3332e414ef45d428d9305bcf3afae66213559a3b368f710b1a743" exitCode=0
Feb 20 14:48:43.432217 master-0 kubenswrapper[7744]: E0220 14:48:43.432018 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:48:43.432217 master-0 kubenswrapper[7744]: E0220 14:48:43.432075 7744 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 20 14:48:46.072808 master-0 kubenswrapper[7744]: I0220 14:48:46.072697 7744 generic.go:334] "Generic (PLEG): container finished" podID="c81ad608-a8ad-4289-a8d2-d48acb9b540c" containerID="5433accfcf1efda61ccbe8f683016067c773a6f6dbc87107ff277c75114e35c4" exitCode=0
Feb 20 14:48:46.634349 master-0 kubenswrapper[7744]: E0220 14:48:46.634277 7744 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 20 14:48:46.634349 master-0 kubenswrapper[7744]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-2sw9z_openshift-operator-lifecycle-manager_1fe69517-eec2-4721-933c-fa27cea7ab1f_0(186f827d34c791054f8dbf45cd803de4898cec3b3f0ecaadcedcc362c23c5b5b): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-2sw9z to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"186f827d34c791054f8dbf45cd803de4898cec3b3f0ecaadcedcc362c23c5b5b" Netns:"/var/run/netns/7c75d10e-0ae7-4d46-8565-9a0dd91d53cb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-2sw9z;K8S_POD_INFRA_CONTAINER_ID=186f827d34c791054f8dbf45cd803de4898cec3b3f0ecaadcedcc362c23c5b5b;K8S_POD_UID=1fe69517-eec2-4721-933c-fa27cea7ab1f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z/1fe69517-eec2-4721-933c-fa27cea7ab1f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-2sw9z in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-2sw9z in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods package-server-manager-5c75f78c8b-2sw9z)
Feb 20 14:48:46.634349 master-0 kubenswrapper[7744]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 20 14:48:46.634349 master-0 kubenswrapper[7744]: >
Feb 20 14:48:46.634733 master-0 kubenswrapper[7744]: E0220 14:48:46.634392 7744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 20 14:48:46.634733 master-0 kubenswrapper[7744]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-2sw9z_openshift-operator-lifecycle-manager_1fe69517-eec2-4721-933c-fa27cea7ab1f_0(186f827d34c791054f8dbf45cd803de4898cec3b3f0ecaadcedcc362c23c5b5b): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-2sw9z to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"186f827d34c791054f8dbf45cd803de4898cec3b3f0ecaadcedcc362c23c5b5b" Netns:"/var/run/netns/7c75d10e-0ae7-4d46-8565-9a0dd91d53cb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-2sw9z;K8S_POD_INFRA_CONTAINER_ID=186f827d34c791054f8dbf45cd803de4898cec3b3f0ecaadcedcc362c23c5b5b;K8S_POD_UID=1fe69517-eec2-4721-933c-fa27cea7ab1f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z/1fe69517-eec2-4721-933c-fa27cea7ab1f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-2sw9z in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-2sw9z in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods package-server-manager-5c75f78c8b-2sw9z)
Feb 20 14:48:46.634733 master-0 kubenswrapper[7744]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 20 14:48:46.634733 master-0 kubenswrapper[7744]: > pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:48:46.634733 master-0 kubenswrapper[7744]: E0220 14:48:46.634428 7744 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 20 14:48:46.634733 master-0 kubenswrapper[7744]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-2sw9z_openshift-operator-lifecycle-manager_1fe69517-eec2-4721-933c-fa27cea7ab1f_0(186f827d34c791054f8dbf45cd803de4898cec3b3f0ecaadcedcc362c23c5b5b): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-2sw9z to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"186f827d34c791054f8dbf45cd803de4898cec3b3f0ecaadcedcc362c23c5b5b" Netns:"/var/run/netns/7c75d10e-0ae7-4d46-8565-9a0dd91d53cb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-2sw9z;K8S_POD_INFRA_CONTAINER_ID=186f827d34c791054f8dbf45cd803de4898cec3b3f0ecaadcedcc362c23c5b5b;K8S_POD_UID=1fe69517-eec2-4721-933c-fa27cea7ab1f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z/1fe69517-eec2-4721-933c-fa27cea7ab1f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-2sw9z in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-2sw9z in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods package-server-manager-5c75f78c8b-2sw9z)
Feb 20 14:48:46.634733 master-0 kubenswrapper[7744]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 20 14:48:46.634733 master-0 kubenswrapper[7744]: > pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:48:46.634733 master-0 kubenswrapper[7744]: E0220 14:48:46.634549 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"package-server-manager-5c75f78c8b-2sw9z_openshift-operator-lifecycle-manager(1fe69517-eec2-4721-933c-fa27cea7ab1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"package-server-manager-5c75f78c8b-2sw9z_openshift-operator-lifecycle-manager(1fe69517-eec2-4721-933c-fa27cea7ab1f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-5c75f78c8b-2sw9z_openshift-operator-lifecycle-manager_1fe69517-eec2-4721-933c-fa27cea7ab1f_0(186f827d34c791054f8dbf45cd803de4898cec3b3f0ecaadcedcc362c23c5b5b): error adding pod openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-2sw9z to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"186f827d34c791054f8dbf45cd803de4898cec3b3f0ecaadcedcc362c23c5b5b\\\" Netns:\\\"/var/run/netns/7c75d10e-0ae7-4d46-8565-9a0dd91d53cb\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-5c75f78c8b-2sw9z;K8S_POD_INFRA_CONTAINER_ID=186f827d34c791054f8dbf45cd803de4898cec3b3f0ecaadcedcc362c23c5b5b;K8S_POD_UID=1fe69517-eec2-4721-933c-fa27cea7ab1f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z/1fe69517-eec2-4721-933c-fa27cea7ab1f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-5c75f78c8b-2sw9z in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-5c75f78c8b-2sw9z in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods package-server-manager-5c75f78c8b-2sw9z)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" podUID="1fe69517-eec2-4721-933c-fa27cea7ab1f"
Feb 20 14:48:47.080540 master-0 kubenswrapper[7744]: I0220 14:48:47.080490 7744 generic.go:334] "Generic (PLEG): container finished" podID="989af121-da08-4f40-b08c-dd2aa67bc60c" containerID="31ee4b259747c34f0e0b3ef2fb4560b0c5185716f80403e8aa587e56efaa8aa2" exitCode=0
Feb 20 14:48:49.050339 master-0 kubenswrapper[7744]: E0220 14:48:49.050246 7744 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Feb 20 14:48:49.051147 master-0 kubenswrapper[7744]: E0220 14:48:49.050508 7744 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.013s"
Feb 20 14:48:49.066303 master-0 kubenswrapper[7744]: I0220 14:48:49.066220 7744 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Feb 20 14:48:50.100462 master-0 kubenswrapper[7744]: I0220 14:48:50.100368 7744 generic.go:334] "Generic (PLEG): container finished" podID="4c31b8a7-edcb-403d-9122-7eb740f7d659" containerID="941dd44ae98490c4a66ceb486a6367ef40fefdfd465008c4ef290585229b84c1" exitCode=0
Feb 20 14:48:52.020757 master-0 kubenswrapper[7744]: I0220 14:48:52.020707 7744 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-6r5qx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 20 14:48:52.021734 master-0 kubenswrapper[7744]: I0220 14:48:52.021683 7744 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" podUID="8157f73d-c757-40c4-80bc-3c9de2f2288a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 20 14:48:52.115381 master-0 kubenswrapper[7744]: I0220 14:48:52.115295 7744 generic.go:334] "Generic (PLEG): container finished" podID="8157f73d-c757-40c4-80bc-3c9de2f2288a" containerID="cc8ec7e8b926ba49c143a81485ff0f3a14da5399a34238c1afe1d5e4cc71a0ba" exitCode=0
Feb 20 14:48:53.127714 master-0 kubenswrapper[7744]: I0220 14:48:53.127617 7744 generic.go:334] "Generic (PLEG): container finished" podID="8b73ae08-0ad7-4f99-8002-6df0d984cd2c" containerID="cfbd27b76aa0dc7c10ce1de7a1bdca66b3303ee8a7bc370fa5d11a1d913c8168" exitCode=0
Feb 20 14:48:54.136423 master-0 kubenswrapper[7744]: I0220 14:48:54.136334 7744 generic.go:334] "Generic (PLEG): container finished" podID="db9dc349-5216-43ff-8c17-3a9384a010ea" containerID="255184eff0270c34b8e6556e377cc8915ae25bb2f15df7164830c2551d563b2b" exitCode=0
Feb 20 14:48:55.494601 master-0 kubenswrapper[7744]: E0220 14:48:55.494526 7744 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.444s"
Feb 20 14:48:55.506109 master-0 kubenswrapper[7744]: I0220 14:48:55.506051 7744 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Feb 20 14:48:55.509222 master-0 kubenswrapper[7744]: I0220 14:48:55.509141 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"986049a1-b3e4-4dca-b178-55eaa7a27bfb","Type":"ContainerDied","Data":"6f844b10f8ac3c87a0a1682a1e7ea9ccbec49915b04b1fd7a88cca60f9004b80"}
Feb 20 14:48:55.509367 master-0 kubenswrapper[7744]: I0220 14:48:55.509239 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:48:55.509367 master-0 kubenswrapper[7744]: I0220 14:48:55.509279 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"975d0fde-cb2f-4599-b3b7-7de876307a61","Type":"ContainerDied","Data":"a59f2b3ca51cdc733c2fc543e6bd0ce183b3347c73680778d3a84d4f88dd4a1f"}
Feb 20 14:48:55.509367 master-0 kubenswrapper[7744]: I0220 14:48:55.509348 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"afdfde0efc416d5e6424b7e7305c6f92f436f753f3f94c9b4efe806e43f618f1"}
Feb 20 14:48:55.509367 master-0 kubenswrapper[7744]: I0220 14:48:55.509369 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"277ab008-e6f0-49cd-801d-54d3071036d4","Type":"ContainerDied","Data":"6c7c12ccf7f07aacf9744ba31c10a72a4c19226b35c8d4fd36f32979a50dbaaf"}
Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509388 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-tj8fx"
event={"ID":"9fd9f419-2cdc-4991-8fb9-87d76ac58976","Type":"ContainerDied","Data":"206ff74dbf8ac205b7526aba69f67598c7eb64c83ff678f0e12a41fa367def5c"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509407 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" event={"ID":"234a44fd-c153-47a6-a11d-7d4b7165c236","Type":"ContainerDied","Data":"62a31d32d4ca4d676ab042ba4779a3437daeccc9e4cd7a7e48c41884a5b21dfe"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509424 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-gprr4" event={"ID":"33675e96-ce49-49be-9117-954ac7cca5d5","Type":"ContainerDied","Data":"4e27eb5860cdd7ddac83a0d0bd7cc2ce5f678c93e28b4ef780b63b34098f4c71"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509441 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" event={"ID":"43e9807a-859c-44c1-8511-0066b0f59ff8","Type":"ContainerDied","Data":"434ed936cc25c1d0e0f36dd52a8572c7b7417d14a5a50821cdca25739e6e9d2b"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509457 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"689ee9b0e5108311ff54df17a58ad47a30c4ae3de8db0ce3794fccb2f1d4b026"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509469 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"5c1b5acd04d8c9f3d08aff8344aaeb09f20e3251f3c25c2cbd1b35b61bcf2908"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509485 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"03c3011b78a1e090e26282e5bfe01d4a95cee038877b51f0cfa6e5d29c599082"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509504 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"e7c73184be18a91b74e2b6b30aa92586b0cebf1411e6cd234cef01c85e9c4104"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509518 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"f07ceb1fe9c4ec76c3cffe25b5d95b988b21733f5be6094583204d0c17e7fcb8"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509532 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerDied","Data":"e7d3fca444d3332e414ef45d428d9305bcf3afae66213559a3b368f710b1a743"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509554 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" event={"ID":"c81ad608-a8ad-4289-a8d2-d48acb9b540c","Type":"ContainerDied","Data":"5433accfcf1efda61ccbe8f683016067c773a6f6dbc87107ff277c75114e35c4"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509571 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" event={"ID":"989af121-da08-4f40-b08c-dd2aa67bc60c","Type":"ContainerDied","Data":"31ee4b259747c34f0e0b3ef2fb4560b0c5185716f80403e8aa587e56efaa8aa2"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509588 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" event={"ID":"4c31b8a7-edcb-403d-9122-7eb740f7d659","Type":"ContainerDied","Data":"941dd44ae98490c4a66ceb486a6367ef40fefdfd465008c4ef290585229b84c1"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509606 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" event={"ID":"8157f73d-c757-40c4-80bc-3c9de2f2288a","Type":"ContainerDied","Data":"cc8ec7e8b926ba49c143a81485ff0f3a14da5399a34238c1afe1d5e4cc71a0ba"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509623 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" event={"ID":"8b73ae08-0ad7-4f99-8002-6df0d984cd2c","Type":"ContainerDied","Data":"cfbd27b76aa0dc7c10ce1de7a1bdca66b3303ee8a7bc370fa5d11a1d913c8168"} Feb 20 14:48:55.509624 master-0 kubenswrapper[7744]: I0220 14:48:55.509640 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" event={"ID":"db9dc349-5216-43ff-8c17-3a9384a010ea","Type":"ContainerDied","Data":"255184eff0270c34b8e6556e377cc8915ae25bb2f15df7164830c2551d563b2b"} Feb 20 14:48:55.510679 master-0 kubenswrapper[7744]: I0220 14:48:55.510475 7744 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"270d3a75efe91ed6ef6d1abeb18e00097f8477e6f1fadd3a750363afe0b16909"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 20 14:48:55.510679 master-0 kubenswrapper[7744]: I0220 14:48:55.510615 7744 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://270d3a75efe91ed6ef6d1abeb18e00097f8477e6f1fadd3a750363afe0b16909" gracePeriod=30 Feb 20 14:48:55.510857 master-0 kubenswrapper[7744]: I0220 14:48:55.510807 7744 scope.go:117] "RemoveContainer" containerID="62a31d32d4ca4d676ab042ba4779a3437daeccc9e4cd7a7e48c41884a5b21dfe" Feb 20 14:48:55.511902 master-0 kubenswrapper[7744]: I0220 14:48:55.511337 7744 scope.go:117] "RemoveContainer" containerID="31ee4b259747c34f0e0b3ef2fb4560b0c5185716f80403e8aa587e56efaa8aa2" Feb 20 14:48:55.517969 master-0 kubenswrapper[7744]: I0220 14:48:55.512229 7744 scope.go:117] "RemoveContainer" containerID="941dd44ae98490c4a66ceb486a6367ef40fefdfd465008c4ef290585229b84c1" Feb 20 14:48:55.517969 master-0 kubenswrapper[7744]: I0220 14:48:55.512320 7744 scope.go:117] "RemoveContainer" containerID="cc8ec7e8b926ba49c143a81485ff0f3a14da5399a34238c1afe1d5e4cc71a0ba" Feb 20 14:48:55.517969 master-0 kubenswrapper[7744]: I0220 14:48:55.512473 7744 scope.go:117] "RemoveContainer" containerID="4e27eb5860cdd7ddac83a0d0bd7cc2ce5f678c93e28b4ef780b63b34098f4c71" Feb 20 14:48:55.517969 master-0 kubenswrapper[7744]: I0220 14:48:55.514402 7744 scope.go:117] "RemoveContainer" containerID="5433accfcf1efda61ccbe8f683016067c773a6f6dbc87107ff277c75114e35c4" Feb 20 14:48:55.517969 master-0 kubenswrapper[7744]: I0220 14:48:55.514665 7744 scope.go:117] "RemoveContainer" containerID="cfbd27b76aa0dc7c10ce1de7a1bdca66b3303ee8a7bc370fa5d11a1d913c8168" Feb 20 14:48:55.517969 master-0 kubenswrapper[7744]: I0220 14:48:55.514762 7744 scope.go:117] "RemoveContainer" containerID="255184eff0270c34b8e6556e377cc8915ae25bb2f15df7164830c2551d563b2b" Feb 20 14:48:55.519610 master-0 kubenswrapper[7744]: I0220 14:48:55.518485 7744 scope.go:117] "RemoveContainer" containerID="e7d3fca444d3332e414ef45d428d9305bcf3afae66213559a3b368f710b1a743" Feb 20 14:48:55.519610 master-0 
kubenswrapper[7744]: I0220 14:48:55.518569 7744 scope.go:117] "RemoveContainer" containerID="434ed936cc25c1d0e0f36dd52a8572c7b7417d14a5a50821cdca25739e6e9d2b" Feb 20 14:48:55.532127 master-0 kubenswrapper[7744]: I0220 14:48:55.527053 7744 scope.go:117] "RemoveContainer" containerID="206ff74dbf8ac205b7526aba69f67598c7eb64c83ff678f0e12a41fa367def5c" Feb 20 14:48:55.572525 master-0 kubenswrapper[7744]: I0220 14:48:55.572463 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 20 14:48:55.572525 master-0 kubenswrapper[7744]: I0220 14:48:55.572517 7744 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="51256ecb-fc2b-4367-9c68-821ecdd31a7a" Feb 20 14:48:55.580251 master-0 kubenswrapper[7744]: I0220 14:48:55.579084 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 20 14:48:55.580251 master-0 kubenswrapper[7744]: I0220 14:48:55.579128 7744 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="51256ecb-fc2b-4367-9c68-821ecdd31a7a" Feb 20 14:48:55.584748 master-0 kubenswrapper[7744]: I0220 14:48:55.584654 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 20 14:48:56.150191 master-0 kubenswrapper[7744]: I0220 14:48:56.150073 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" event={"ID":"234a44fd-c153-47a6-a11d-7d4b7165c236","Type":"ContainerStarted","Data":"581f236214a140a0dd97c9926ea209ede3f39ed6cfcbab89bbd1dddd4483776d"} Feb 20 14:48:56.152251 master-0 kubenswrapper[7744]: I0220 14:48:56.152210 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-gprr4_33675e96-ce49-49be-9117-954ac7cca5d5/approver/0.log" Feb 20 14:48:56.152714 master-0 
kubenswrapper[7744]: I0220 14:48:56.152666 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-gprr4" event={"ID":"33675e96-ce49-49be-9117-954ac7cca5d5","Type":"ContainerStarted","Data":"93c53e18dcac71f47a3746e6562e8b692068a3b0ff7c4afe8e6e0d3f178f230b"} Feb 20 14:48:56.160271 master-0 kubenswrapper[7744]: I0220 14:48:56.160224 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" event={"ID":"8157f73d-c757-40c4-80bc-3c9de2f2288a","Type":"ContainerStarted","Data":"9eac150251b3b5d386062f7aa8467ef3cc273bff50cfaf7bb7d3226879ebfbb8"} Feb 20 14:48:56.183873 master-0 kubenswrapper[7744]: I0220 14:48:56.183813 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerStarted","Data":"a95da6b755620b3477b82b60290cab82bafb501ad18fb013d6a2d035fb2977b7"} Feb 20 14:48:56.197265 master-0 kubenswrapper[7744]: I0220 14:48:56.197168 7744 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="270d3a75efe91ed6ef6d1abeb18e00097f8477e6f1fadd3a750363afe0b16909" exitCode=0 Feb 20 14:48:56.197265 master-0 kubenswrapper[7744]: I0220 14:48:56.197242 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"270d3a75efe91ed6ef6d1abeb18e00097f8477e6f1fadd3a750363afe0b16909"} Feb 20 14:48:56.197486 master-0 kubenswrapper[7744]: I0220 14:48:56.197296 7744 scope.go:117] "RemoveContainer" containerID="24b2aee1f972d18ca4405ff399927f57d407665113e657b4f3db6303afde8747" Feb 20 14:48:56.202116 master-0 kubenswrapper[7744]: I0220 14:48:56.202070 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" event={"ID":"db9dc349-5216-43ff-8c17-3a9384a010ea","Type":"ContainerStarted","Data":"09404528b44b3c434922019f2c7d2520c924306f4d6dd307fe7646b14b292751"} Feb 20 14:48:56.210530 master-0 kubenswrapper[7744]: I0220 14:48:56.210477 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-tj8fx_9fd9f419-2cdc-4991-8fb9-87d76ac58976/network-operator/0.log" Feb 20 14:48:56.213457 master-0 kubenswrapper[7744]: I0220 14:48:56.213387 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" event={"ID":"8b73ae08-0ad7-4f99-8002-6df0d984cd2c","Type":"ContainerStarted","Data":"5a018964150859d45676b2b2c0dbe19b3259b6b089851d91eea98e412d65f129"} Feb 20 14:48:56.216758 master-0 kubenswrapper[7744]: I0220 14:48:56.216695 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" event={"ID":"989af121-da08-4f40-b08c-dd2aa67bc60c","Type":"ContainerStarted","Data":"832f243cdb2cdff1065e35c1a4b8eb6397a6696e55399d5bf71d3cb4f866d80d"} Feb 20 14:48:56.220466 master-0 kubenswrapper[7744]: I0220 14:48:56.220420 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" event={"ID":"c81ad608-a8ad-4289-a8d2-d48acb9b540c","Type":"ContainerStarted","Data":"76678f3b3771aa596ae00afe94f70cbbd9bcae26c675da96c50642342f6abcee"} Feb 20 14:48:56.292734 master-0 kubenswrapper[7744]: I0220 14:48:56.292503 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.292468181 podStartE2EDuration="1.292468181s" podCreationTimestamp="2026-02-20 14:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:48:56.290577395 +0000 UTC m=+135.492777325" watchObservedRunningTime="2026-02-20 14:48:56.292468181 +0000 UTC m=+135.494668091" Feb 20 14:48:56.697525 master-0 kubenswrapper[7744]: I0220 14:48:56.697493 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_975d0fde-cb2f-4599-b3b7-7de876307a61/installer/0.log" Feb 20 14:48:56.697866 master-0 kubenswrapper[7744]: I0220 14:48:56.697564 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 20 14:48:56.701014 master-0 kubenswrapper[7744]: I0220 14:48:56.700978 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_277ab008-e6f0-49cd-801d-54d3071036d4/installer/0.log" Feb 20 14:48:56.701093 master-0 kubenswrapper[7744]: I0220 14:48:56.701035 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 20 14:48:56.705053 master-0 kubenswrapper[7744]: I0220 14:48:56.705024 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_986049a1-b3e4-4dca-b178-55eaa7a27bfb/installer/0.log" Feb 20 14:48:56.705129 master-0 kubenswrapper[7744]: I0220 14:48:56.705073 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 14:48:56.860268 master-0 kubenswrapper[7744]: I0220 14:48:56.860168 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/986049a1-b3e4-4dca-b178-55eaa7a27bfb-kube-api-access\") pod \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\" (UID: \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\") " Feb 20 14:48:56.860528 master-0 kubenswrapper[7744]: I0220 14:48:56.860288 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/277ab008-e6f0-49cd-801d-54d3071036d4-kube-api-access\") pod \"277ab008-e6f0-49cd-801d-54d3071036d4\" (UID: \"277ab008-e6f0-49cd-801d-54d3071036d4\") " Feb 20 14:48:56.860528 master-0 kubenswrapper[7744]: I0220 14:48:56.860362 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/975d0fde-cb2f-4599-b3b7-7de876307a61-kubelet-dir\") pod \"975d0fde-cb2f-4599-b3b7-7de876307a61\" (UID: \"975d0fde-cb2f-4599-b3b7-7de876307a61\") " Feb 20 14:48:56.860528 master-0 kubenswrapper[7744]: I0220 14:48:56.860405 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/277ab008-e6f0-49cd-801d-54d3071036d4-var-lock\") pod \"277ab008-e6f0-49cd-801d-54d3071036d4\" (UID: \"277ab008-e6f0-49cd-801d-54d3071036d4\") " Feb 20 14:48:56.860528 master-0 kubenswrapper[7744]: I0220 14:48:56.860480 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/986049a1-b3e4-4dca-b178-55eaa7a27bfb-kubelet-dir\") pod \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\" (UID: \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\") " Feb 20 14:48:56.860653 master-0 kubenswrapper[7744]: I0220 14:48:56.860543 7744 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/975d0fde-cb2f-4599-b3b7-7de876307a61-kube-api-access\") pod \"975d0fde-cb2f-4599-b3b7-7de876307a61\" (UID: \"975d0fde-cb2f-4599-b3b7-7de876307a61\") " Feb 20 14:48:56.860653 master-0 kubenswrapper[7744]: I0220 14:48:56.860574 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/986049a1-b3e4-4dca-b178-55eaa7a27bfb-var-lock\") pod \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\" (UID: \"986049a1-b3e4-4dca-b178-55eaa7a27bfb\") " Feb 20 14:48:56.860653 master-0 kubenswrapper[7744]: I0220 14:48:56.860615 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/975d0fde-cb2f-4599-b3b7-7de876307a61-var-lock\") pod \"975d0fde-cb2f-4599-b3b7-7de876307a61\" (UID: \"975d0fde-cb2f-4599-b3b7-7de876307a61\") " Feb 20 14:48:56.860653 master-0 kubenswrapper[7744]: I0220 14:48:56.860647 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/277ab008-e6f0-49cd-801d-54d3071036d4-kubelet-dir\") pod \"277ab008-e6f0-49cd-801d-54d3071036d4\" (UID: \"277ab008-e6f0-49cd-801d-54d3071036d4\") " Feb 20 14:48:56.861050 master-0 kubenswrapper[7744]: I0220 14:48:56.861003 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/277ab008-e6f0-49cd-801d-54d3071036d4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "277ab008-e6f0-49cd-801d-54d3071036d4" (UID: "277ab008-e6f0-49cd-801d-54d3071036d4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:48:56.861724 master-0 kubenswrapper[7744]: I0220 14:48:56.861683 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/986049a1-b3e4-4dca-b178-55eaa7a27bfb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "986049a1-b3e4-4dca-b178-55eaa7a27bfb" (UID: "986049a1-b3e4-4dca-b178-55eaa7a27bfb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:48:56.861764 master-0 kubenswrapper[7744]: I0220 14:48:56.861742 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/277ab008-e6f0-49cd-801d-54d3071036d4-var-lock" (OuterVolumeSpecName: "var-lock") pod "277ab008-e6f0-49cd-801d-54d3071036d4" (UID: "277ab008-e6f0-49cd-801d-54d3071036d4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:48:56.861854 master-0 kubenswrapper[7744]: I0220 14:48:56.861798 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/975d0fde-cb2f-4599-b3b7-7de876307a61-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "975d0fde-cb2f-4599-b3b7-7de876307a61" (UID: "975d0fde-cb2f-4599-b3b7-7de876307a61"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:48:56.861854 master-0 kubenswrapper[7744]: I0220 14:48:56.861826 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/986049a1-b3e4-4dca-b178-55eaa7a27bfb-var-lock" (OuterVolumeSpecName: "var-lock") pod "986049a1-b3e4-4dca-b178-55eaa7a27bfb" (UID: "986049a1-b3e4-4dca-b178-55eaa7a27bfb"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:48:56.861941 master-0 kubenswrapper[7744]: I0220 14:48:56.861902 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/975d0fde-cb2f-4599-b3b7-7de876307a61-var-lock" (OuterVolumeSpecName: "var-lock") pod "975d0fde-cb2f-4599-b3b7-7de876307a61" (UID: "975d0fde-cb2f-4599-b3b7-7de876307a61"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:48:56.865046 master-0 kubenswrapper[7744]: I0220 14:48:56.864989 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277ab008-e6f0-49cd-801d-54d3071036d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "277ab008-e6f0-49cd-801d-54d3071036d4" (UID: "277ab008-e6f0-49cd-801d-54d3071036d4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:48:56.866342 master-0 kubenswrapper[7744]: I0220 14:48:56.866289 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975d0fde-cb2f-4599-b3b7-7de876307a61-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "975d0fde-cb2f-4599-b3b7-7de876307a61" (UID: "975d0fde-cb2f-4599-b3b7-7de876307a61"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:48:56.867973 master-0 kubenswrapper[7744]: I0220 14:48:56.867883 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986049a1-b3e4-4dca-b178-55eaa7a27bfb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "986049a1-b3e4-4dca-b178-55eaa7a27bfb" (UID: "986049a1-b3e4-4dca-b178-55eaa7a27bfb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:48:56.961969 master-0 kubenswrapper[7744]: I0220 14:48:56.961855 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/975d0fde-cb2f-4599-b3b7-7de876307a61-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:48:56.961969 master-0 kubenswrapper[7744]: I0220 14:48:56.961912 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/277ab008-e6f0-49cd-801d-54d3071036d4-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 14:48:56.961969 master-0 kubenswrapper[7744]: I0220 14:48:56.961959 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/986049a1-b3e4-4dca-b178-55eaa7a27bfb-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:48:56.961969 master-0 kubenswrapper[7744]: I0220 14:48:56.961977 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/975d0fde-cb2f-4599-b3b7-7de876307a61-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 14:48:56.962424 master-0 kubenswrapper[7744]: I0220 14:48:56.961995 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/986049a1-b3e4-4dca-b178-55eaa7a27bfb-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 14:48:56.962424 master-0 kubenswrapper[7744]: I0220 14:48:56.962013 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/975d0fde-cb2f-4599-b3b7-7de876307a61-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 14:48:56.962424 master-0 kubenswrapper[7744]: I0220 14:48:56.962030 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/277ab008-e6f0-49cd-801d-54d3071036d4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" 
Feb 20 14:48:56.962424 master-0 kubenswrapper[7744]: I0220 14:48:56.962047 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/986049a1-b3e4-4dca-b178-55eaa7a27bfb-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 14:48:56.962424 master-0 kubenswrapper[7744]: I0220 14:48:56.962064 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/277ab008-e6f0-49cd-801d-54d3071036d4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 14:48:57.230003 master-0 kubenswrapper[7744]: I0220 14:48:57.229879 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"c892ef72ed4285d837da54275f52fe7ef188495a8fa319fa925855d3b2fd2a7f"} Feb 20 14:48:57.231668 master-0 kubenswrapper[7744]: I0220 14:48:57.231647 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_975d0fde-cb2f-4599-b3b7-7de876307a61/installer/0.log" Feb 20 14:48:57.231748 master-0 kubenswrapper[7744]: I0220 14:48:57.231713 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"975d0fde-cb2f-4599-b3b7-7de876307a61","Type":"ContainerDied","Data":"60d5adb534a09e9e59437dcfd0ecba0aa4cc034a5ffeab8cd4bf643934aa8641"} Feb 20 14:48:57.231748 master-0 kubenswrapper[7744]: I0220 14:48:57.231730 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60d5adb534a09e9e59437dcfd0ecba0aa4cc034a5ffeab8cd4bf643934aa8641" Feb 20 14:48:57.231902 master-0 kubenswrapper[7744]: I0220 14:48:57.231850 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 20 14:48:57.233834 master-0 kubenswrapper[7744]: I0220 14:48:57.233790 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_277ab008-e6f0-49cd-801d-54d3071036d4/installer/0.log"
Feb 20 14:48:57.233987 master-0 kubenswrapper[7744]: I0220 14:48:57.233963 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"277ab008-e6f0-49cd-801d-54d3071036d4","Type":"ContainerDied","Data":"9973189e4a2bcf54eee01772766f154e4f0414d83cd39d056cbee6f94ee506af"}
Feb 20 14:48:57.234042 master-0 kubenswrapper[7744]: I0220 14:48:57.233990 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9973189e4a2bcf54eee01772766f154e4f0414d83cd39d056cbee6f94ee506af"
Feb 20 14:48:57.234111 master-0 kubenswrapper[7744]: I0220 14:48:57.234056 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 20 14:48:57.236708 master-0 kubenswrapper[7744]: I0220 14:48:57.236681 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-tj8fx_9fd9f419-2cdc-4991-8fb9-87d76ac58976/network-operator/0.log"
Feb 20 14:48:57.236976 master-0 kubenswrapper[7744]: I0220 14:48:57.236900 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" event={"ID":"9fd9f419-2cdc-4991-8fb9-87d76ac58976","Type":"ContainerStarted","Data":"5761b5d97bb857209597024a19cdbe2341d245c395e6ce681c8bc8fd7fa023bd"}
Feb 20 14:48:57.239244 master-0 kubenswrapper[7744]: I0220 14:48:57.239181 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_986049a1-b3e4-4dca-b178-55eaa7a27bfb/installer/0.log"
Feb 20 14:48:57.239497 master-0 kubenswrapper[7744]: I0220 14:48:57.239446 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 20 14:48:57.239594 master-0 kubenswrapper[7744]: I0220 14:48:57.239499 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"986049a1-b3e4-4dca-b178-55eaa7a27bfb","Type":"ContainerDied","Data":"c5695ade0d175e611702bd38877ae968a3b086637dc9039f70a7cafe4447aa4c"}
Feb 20 14:48:57.239663 master-0 kubenswrapper[7744]: I0220 14:48:57.239591 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5695ade0d175e611702bd38877ae968a3b086637dc9039f70a7cafe4447aa4c"
Feb 20 14:48:57.241792 master-0 kubenswrapper[7744]: I0220 14:48:57.241740 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" event={"ID":"43e9807a-859c-44c1-8511-0066b0f59ff8","Type":"ContainerStarted","Data":"576abacd055041debbe6e09151c67b20fc73597532194f0a3cbc9b1e0f7ce583"}
Feb 20 14:48:57.245729 master-0 kubenswrapper[7744]: I0220 14:48:57.245672 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" event={"ID":"4c31b8a7-edcb-403d-9122-7eb740f7d659","Type":"ContainerStarted","Data":"696e06ef6554e221cbbd27e48c3197d621e72c8d19b1df8b12bd4eab6b3279b8"}
Feb 20 14:48:57.492022 master-0 kubenswrapper[7744]: I0220 14:48:57.488668 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 20 14:48:57.492022 master-0 kubenswrapper[7744]: I0220 14:48:57.488735 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Feb 20 14:48:57.523020 master-0 kubenswrapper[7744]: I0220 14:48:57.522417 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Feb 20 14:48:58.263167 master-0 kubenswrapper[7744]: I0220 14:48:58.263126 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 20 14:49:00.037017 master-0 kubenswrapper[7744]: I0220 14:49:00.036957 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:49:00.037707 master-0 kubenswrapper[7744]: I0220 14:49:00.037490 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:49:00.107944 master-0 kubenswrapper[7744]: I0220 14:49:00.104466 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:49:00.111938 master-0 kubenswrapper[7744]: I0220 14:49:00.108614 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:49:00.264031 master-0 kubenswrapper[7744]: I0220 14:49:00.263963 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:49:00.524460 master-0 kubenswrapper[7744]: I0220 14:49:00.524393 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"]
Feb 20 14:49:00.534148 master-0 kubenswrapper[7744]: W0220 14:49:00.534099 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe69517_eec2_4721_933c_fa27cea7ab1f.slice/crio-5fc1828a85716c5c152a1e9d497ac8c147726f1a98a02df72c44bdcd9feda4f1 WatchSource:0}: Error finding container 5fc1828a85716c5c152a1e9d497ac8c147726f1a98a02df72c44bdcd9feda4f1: Status 404 returned error can't find the container with id 5fc1828a85716c5c152a1e9d497ac8c147726f1a98a02df72c44bdcd9feda4f1
Feb 20 14:49:01.276103 master-0 kubenswrapper[7744]: I0220 14:49:01.276033 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" event={"ID":"1fe69517-eec2-4721-933c-fa27cea7ab1f","Type":"ContainerStarted","Data":"14c2be5daaa831938eade439658f265a27e1cda2542e3eb0c4e1d57c63fee064"}
Feb 20 14:49:01.276103 master-0 kubenswrapper[7744]: I0220 14:49:01.276078 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" event={"ID":"1fe69517-eec2-4721-933c-fa27cea7ab1f","Type":"ContainerStarted","Data":"5fc1828a85716c5c152a1e9d497ac8c147726f1a98a02df72c44bdcd9feda4f1"}
Feb 20 14:49:08.315403 master-0 kubenswrapper[7744]: I0220 14:49:08.315334 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" event={"ID":"1fe69517-eec2-4721-933c-fa27cea7ab1f","Type":"ContainerStarted","Data":"8fa1fcd077e28cf5cfeec8c2cafd29cf0677802573ac33c46747c76a0973c8ec"}
Feb 20 14:49:08.316492 master-0 kubenswrapper[7744]: I0220 14:49:08.316444 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:49:10.366141 master-0 kubenswrapper[7744]: I0220 14:49:10.366083 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:49:13.913236 master-0 kubenswrapper[7744]: I0220 14:49:13.913149 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"]
Feb 20 14:49:13.914066 master-0 kubenswrapper[7744]: E0220 14:49:13.913478 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986049a1-b3e4-4dca-b178-55eaa7a27bfb" containerName="installer"
Feb 20 14:49:13.914066 master-0 kubenswrapper[7744]: I0220 14:49:13.913495 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="986049a1-b3e4-4dca-b178-55eaa7a27bfb" containerName="installer"
Feb 20 14:49:13.914066 master-0 kubenswrapper[7744]: E0220 14:49:13.913525 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53835140-8eed-401c-ac07-f89b554ff616" containerName="installer"
Feb 20 14:49:13.914066 master-0 kubenswrapper[7744]: I0220 14:49:13.913534 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="53835140-8eed-401c-ac07-f89b554ff616" containerName="installer"
Feb 20 14:49:13.914066 master-0 kubenswrapper[7744]: E0220 14:49:13.913546 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277ab008-e6f0-49cd-801d-54d3071036d4" containerName="installer"
Feb 20 14:49:13.914066 master-0 kubenswrapper[7744]: I0220 14:49:13.913555 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="277ab008-e6f0-49cd-801d-54d3071036d4" containerName="installer"
Feb 20 14:49:13.914066 master-0 kubenswrapper[7744]: E0220 14:49:13.913569 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975d0fde-cb2f-4599-b3b7-7de876307a61" containerName="installer"
Feb 20 14:49:13.914066 master-0 kubenswrapper[7744]: I0220 14:49:13.913575 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="975d0fde-cb2f-4599-b3b7-7de876307a61" containerName="installer"
Feb 20 14:49:13.914066 master-0 kubenswrapper[7744]: I0220 14:49:13.913684 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="53835140-8eed-401c-ac07-f89b554ff616" containerName="installer"
Feb 20 14:49:13.914066 master-0 kubenswrapper[7744]: I0220 14:49:13.913712 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="975d0fde-cb2f-4599-b3b7-7de876307a61" containerName="installer"
Feb 20 14:49:13.914066 master-0 kubenswrapper[7744]: I0220 14:49:13.913726 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="986049a1-b3e4-4dca-b178-55eaa7a27bfb" containerName="installer"
Feb 20 14:49:13.914066 master-0 kubenswrapper[7744]: I0220 14:49:13.913737 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="277ab008-e6f0-49cd-801d-54d3071036d4" containerName="installer"
Feb 20 14:49:13.914485 master-0 kubenswrapper[7744]: I0220 14:49:13.914403 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"
Feb 20 14:49:13.914692 master-0 kubenswrapper[7744]: I0220 14:49:13.914635 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"]
Feb 20 14:49:13.915729 master-0 kubenswrapper[7744]: I0220 14:49:13.915703 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:13.918086 master-0 kubenswrapper[7744]: I0220 14:49:13.917353 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x"]
Feb 20 14:49:13.923601 master-0 kubenswrapper[7744]: I0220 14:49:13.923547 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x"
Feb 20 14:49:13.935963 master-0 kubenswrapper[7744]: I0220 14:49:13.932978 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-cp6wb"
Feb 20 14:49:13.935963 master-0 kubenswrapper[7744]: I0220 14:49:13.933278 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-ts5zc"
Feb 20 14:49:13.935963 master-0 kubenswrapper[7744]: I0220 14:49:13.933406 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 20 14:49:13.935963 master-0 kubenswrapper[7744]: I0220 14:49:13.933558 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 20 14:49:13.935963 master-0 kubenswrapper[7744]: I0220 14:49:13.933690 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 20 14:49:13.935963 master-0 kubenswrapper[7744]: I0220 14:49:13.933896 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 20 14:49:13.935963 master-0 kubenswrapper[7744]: I0220 14:49:13.934102 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 20 14:49:13.935963 master-0 kubenswrapper[7744]: I0220 14:49:13.934211 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 20 14:49:13.935963 master-0 kubenswrapper[7744]: I0220 14:49:13.934738 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 20 14:49:13.935963 master-0 kubenswrapper[7744]: I0220 14:49:13.935312 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 20 14:49:13.935963 master-0 kubenswrapper[7744]: I0220 14:49:13.935858 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 20 14:49:13.936550 master-0 kubenswrapper[7744]: I0220 14:49:13.936187 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 20 14:49:13.942957 master-0 kubenswrapper[7744]: I0220 14:49:13.940406 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xrgsf"
Feb 20 14:49:13.942957 master-0 kubenswrapper[7744]: I0220 14:49:13.940520 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 20 14:49:13.945296 master-0 kubenswrapper[7744]: I0220 14:49:13.945255 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 20 14:49:13.950964 master-0 kubenswrapper[7744]: I0220 14:49:13.948977 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x"]
Feb 20 14:49:13.956997 master-0 kubenswrapper[7744]: I0220 14:49:13.954584 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"]
Feb 20 14:49:14.022944 master-0 kubenswrapper[7744]: I0220 14:49:14.018841 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"
Feb 20 14:49:14.022944 master-0 kubenswrapper[7744]: I0220 14:49:14.018882 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-machine-approver-tls\") pod \"machine-approver-798b897698-vzbjf\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:14.022944 master-0 kubenswrapper[7744]: I0220 14:49:14.018899 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2fcg\" (UniqueName: \"kubernetes.io/projected/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-kube-api-access-m2fcg\") pod \"machine-approver-798b897698-vzbjf\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:14.022944 master-0 kubenswrapper[7744]: I0220 14:49:14.018920 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-config\") pod \"machine-approver-798b897698-vzbjf\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:14.022944 master-0 kubenswrapper[7744]: I0220 14:49:14.018964 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk2pl\" (UniqueName: \"kubernetes.io/projected/ee3a6748-0bbc-41bf-8726-a8db18faf03b-kube-api-access-mk2pl\") pod \"cluster-samples-operator-65c5c48b9b-92c4x\" (UID: \"ee3a6748-0bbc-41bf-8726-a8db18faf03b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x"
Feb 20 14:49:14.022944 master-0 kubenswrapper[7744]: I0220 14:49:14.018981 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-auth-proxy-config\") pod \"machine-approver-798b897698-vzbjf\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:14.022944 master-0 kubenswrapper[7744]: I0220 14:49:14.018997 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"
Feb 20 14:49:14.022944 master-0 kubenswrapper[7744]: I0220 14:49:14.019035 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpt8j\" (UniqueName: \"kubernetes.io/projected/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-kube-api-access-jpt8j\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"
Feb 20 14:49:14.022944 master-0 kubenswrapper[7744]: I0220 14:49:14.019054 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee3a6748-0bbc-41bf-8726-a8db18faf03b-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-92c4x\" (UID: \"ee3a6748-0bbc-41bf-8726-a8db18faf03b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x"
Feb 20 14:49:14.029948 master-0 kubenswrapper[7744]: I0220 14:49:14.024258 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw"]
Feb 20 14:49:14.029948 master-0 kubenswrapper[7744]: I0220 14:49:14.025994 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw"
Feb 20 14:49:14.029948 master-0 kubenswrapper[7744]: I0220 14:49:14.028590 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 20 14:49:14.029948 master-0 kubenswrapper[7744]: I0220 14:49:14.029042 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 20 14:49:14.029948 master-0 kubenswrapper[7744]: I0220 14:49:14.029255 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-mnmfc"
Feb 20 14:49:14.029948 master-0 kubenswrapper[7744]: I0220 14:49:14.029442 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 14:49:14.029948 master-0 kubenswrapper[7744]: I0220 14:49:14.029678 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 20 14:49:14.031067 master-0 kubenswrapper[7744]: I0220 14:49:14.030651 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 20 14:49:14.033516 master-0 kubenswrapper[7744]: I0220 14:49:14.033469 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7"]
Feb 20 14:49:14.035442 master-0 kubenswrapper[7744]: I0220 14:49:14.034461 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7"
Feb 20 14:49:14.041253 master-0 kubenswrapper[7744]: I0220 14:49:14.041197 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-c8rnz"
Feb 20 14:49:14.041510 master-0 kubenswrapper[7744]: I0220 14:49:14.041457 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 20 14:49:14.046244 master-0 kubenswrapper[7744]: I0220 14:49:14.046210 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-fjtrw"]
Feb 20 14:49:14.047059 master-0 kubenswrapper[7744]: I0220 14:49:14.047033 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw"
Feb 20 14:49:14.051441 master-0 kubenswrapper[7744]: I0220 14:49:14.049050 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-b9jmk"]
Feb 20 14:49:14.051441 master-0 kubenswrapper[7744]: I0220 14:49:14.050256 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 20 14:49:14.051441 master-0 kubenswrapper[7744]: I0220 14:49:14.050397 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 20 14:49:14.051441 master-0 kubenswrapper[7744]: I0220 14:49:14.050508 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 20 14:49:14.051657 master-0 kubenswrapper[7744]: I0220 14:49:14.051461 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-4xtlh"
Feb 20 14:49:14.051657 master-0 kubenswrapper[7744]: I0220 14:49:14.051600 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-b9jmk"
Feb 20 14:49:14.057007 master-0 kubenswrapper[7744]: I0220 14:49:14.056973 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 20 14:49:14.059366 master-0 kubenswrapper[7744]: I0220 14:49:14.058188 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-jscmz"
Feb 20 14:49:14.059366 master-0 kubenswrapper[7744]: I0220 14:49:14.058509 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 20 14:49:14.059366 master-0 kubenswrapper[7744]: I0220 14:49:14.058659 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 20 14:49:14.059366 master-0 kubenswrapper[7744]: I0220 14:49:14.058777 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 20 14:49:14.059366 master-0 kubenswrapper[7744]: I0220 14:49:14.059168 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 20 14:49:14.062353 master-0 kubenswrapper[7744]: I0220 14:49:14.061622 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 20 14:49:14.062353 master-0 kubenswrapper[7744]: I0220 14:49:14.061876 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-b9jmk"]
Feb 20 14:49:14.065210 master-0 kubenswrapper[7744]: I0220 14:49:14.065172 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7"]
Feb 20 14:49:14.084079 master-0 kubenswrapper[7744]: I0220 14:49:14.084005 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-fjtrw"]
Feb 20 14:49:14.112772 master-0 kubenswrapper[7744]: I0220 14:49:14.111808 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8"]
Feb 20 14:49:14.112772 master-0 kubenswrapper[7744]: I0220 14:49:14.112345 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8"
Feb 20 14:49:14.116541 master-0 kubenswrapper[7744]: I0220 14:49:14.116508 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 20 14:49:14.119960 master-0 kubenswrapper[7744]: I0220 14:49:14.116697 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 20 14:49:14.119960 master-0 kubenswrapper[7744]: I0220 14:49:14.116831 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-btmxs"
Feb 20 14:49:14.119960 master-0 kubenswrapper[7744]: I0220 14:49:14.116960 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 20 14:49:14.120289 master-0 kubenswrapper[7744]: I0220 14:49:14.120234 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpt8j\" (UniqueName: \"kubernetes.io/projected/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-kube-api-access-jpt8j\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"
Feb 20 14:49:14.120289 master-0 kubenswrapper[7744]: I0220 14:49:14.120266 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee3a6748-0bbc-41bf-8726-a8db18faf03b-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-92c4x\" (UID: \"ee3a6748-0bbc-41bf-8726-a8db18faf03b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x"
Feb 20 14:49:14.120353 master-0 kubenswrapper[7744]: I0220 14:49:14.120289 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"
Feb 20 14:49:14.120353 master-0 kubenswrapper[7744]: I0220 14:49:14.120308 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-machine-approver-tls\") pod \"machine-approver-798b897698-vzbjf\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:14.120353 master-0 kubenswrapper[7744]: I0220 14:49:14.120323 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2fcg\" (UniqueName: \"kubernetes.io/projected/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-kube-api-access-m2fcg\") pod \"machine-approver-798b897698-vzbjf\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:14.120353 master-0 kubenswrapper[7744]: I0220 14:49:14.120342 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-config\") pod \"machine-approver-798b897698-vzbjf\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:14.120470 master-0 kubenswrapper[7744]: I0220 14:49:14.120360 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk2pl\" (UniqueName: \"kubernetes.io/projected/ee3a6748-0bbc-41bf-8726-a8db18faf03b-kube-api-access-mk2pl\") pod \"cluster-samples-operator-65c5c48b9b-92c4x\" (UID: \"ee3a6748-0bbc-41bf-8726-a8db18faf03b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x"
Feb 20 14:49:14.120470 master-0 kubenswrapper[7744]: I0220 14:49:14.120377 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-auth-proxy-config\") pod \"machine-approver-798b897698-vzbjf\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:14.120470 master-0 kubenswrapper[7744]: I0220 14:49:14.120393 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"
Feb 20 14:49:14.124943 master-0 kubenswrapper[7744]: I0220 14:49:14.121005 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"
Feb 20 14:49:14.128231 master-0 kubenswrapper[7744]: I0220 14:49:14.125172 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r"]
Feb 20 14:49:14.128231 master-0 kubenswrapper[7744]: I0220 14:49:14.125977 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r"
Feb 20 14:49:14.128231 master-0 kubenswrapper[7744]: I0220 14:49:14.127421 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 20 14:49:14.128231 master-0 kubenswrapper[7744]: I0220 14:49:14.127592 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-n8qfb"
Feb 20 14:49:14.128231 master-0 kubenswrapper[7744]: I0220 14:49:14.128018 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 20 14:49:14.132978 master-0 kubenswrapper[7744]: I0220 14:49:14.129370 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-auth-proxy-config\") pod \"machine-approver-798b897698-vzbjf\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:14.132978 master-0 kubenswrapper[7744]: I0220 14:49:14.129425 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-config\") pod \"machine-approver-798b897698-vzbjf\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:14.132978 master-0 kubenswrapper[7744]: I0220 14:49:14.130613 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-machine-approver-tls\") pod \"machine-approver-798b897698-vzbjf\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:14.132978 master-0 kubenswrapper[7744]: I0220 14:49:14.131522 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee3a6748-0bbc-41bf-8726-a8db18faf03b-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-92c4x\" (UID: \"ee3a6748-0bbc-41bf-8726-a8db18faf03b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x"
Feb 20 14:49:14.132978 master-0 kubenswrapper[7744]: I0220 14:49:14.132259 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"
Feb 20 14:49:14.154955 master-0 kubenswrapper[7744]: I0220 14:49:14.152283 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpt8j\" (UniqueName: \"kubernetes.io/projected/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-kube-api-access-jpt8j\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"
Feb 20 14:49:14.154955 master-0 kubenswrapper[7744]: I0220 14:49:14.152345 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"]
Feb 20 14:49:14.169027 master-0 kubenswrapper[7744]: I0220 14:49:14.165031 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2fcg\" (UniqueName: \"kubernetes.io/projected/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-kube-api-access-m2fcg\") pod \"machine-approver-798b897698-vzbjf\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:14.169027 master-0 kubenswrapper[7744]: I0220 14:49:14.165898 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"]
Feb 20 14:49:14.169027 master-0 kubenswrapper[7744]: I0220 14:49:14.166158 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"
Feb 20 14:49:14.175967 master-0 kubenswrapper[7744]: I0220 14:49:14.169623 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk2pl\" (UniqueName: \"kubernetes.io/projected/ee3a6748-0bbc-41bf-8726-a8db18faf03b-kube-api-access-mk2pl\") pod \"cluster-samples-operator-65c5c48b9b-92c4x\" (UID: \"ee3a6748-0bbc-41bf-8726-a8db18faf03b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x"
Feb 20 14:49:14.175967 master-0 kubenswrapper[7744]: I0220 14:49:14.171862 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 20 14:49:14.175967 master-0 kubenswrapper[7744]: I0220 14:49:14.172122 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 20 14:49:14.175967 master-0 kubenswrapper[7744]: I0220 14:49:14.174795 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 20 14:49:14.175967 master-0 kubenswrapper[7744]: I0220 14:49:14.174860 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 20 14:49:14.175967 master-0 kubenswrapper[7744]: I0220 14:49:14.174473 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-8m9cn"
Feb 20 14:49:14.178936 master-0 kubenswrapper[7744]: I0220 14:49:14.178893 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"
Feb 20 14:49:14.188233 master-0 kubenswrapper[7744]: I0220 14:49:14.187867 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc"]
Feb 20 14:49:14.189864 master-0 kubenswrapper[7744]: I0220 14:49:14.189834 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc"
Feb 20 14:49:14.194963 master-0 kubenswrapper[7744]: I0220 14:49:14.191023 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8"]
Feb 20 14:49:14.194963 master-0 kubenswrapper[7744]: I0220 14:49:14.193913 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 20 14:49:14.194963 master-0 kubenswrapper[7744]: I0220 14:49:14.194137 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mvrxq"
Feb 20 14:49:14.196867 master-0 kubenswrapper[7744]: I0220 14:49:14.196824 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 20 14:49:14.197060 master-0 kubenswrapper[7744]: I0220 14:49:14.196975 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 20 14:49:14.197060 master-0 kubenswrapper[7744]: I0220 14:49:14.197048
7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-tljfd" Feb 20 14:49:14.197522 master-0 kubenswrapper[7744]: I0220 14:49:14.197486 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 20 14:49:14.197591 master-0 kubenswrapper[7744]: I0220 14:49:14.197555 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 20 14:49:14.197591 master-0 kubenswrapper[7744]: I0220 14:49:14.197577 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 20 14:49:14.197760 master-0 kubenswrapper[7744]: I0220 14:49:14.197724 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"] Feb 20 14:49:14.200469 master-0 kubenswrapper[7744]: I0220 14:49:14.199863 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r"] Feb 20 14:49:14.206024 master-0 kubenswrapper[7744]: I0220 14:49:14.205975 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc"] Feb 20 14:49:14.216080 master-0 kubenswrapper[7744]: I0220 14:49:14.213811 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"] Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.220662 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7"] Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221528 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"] Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 
14:49:14.221577 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9crd\" (UniqueName: \"kubernetes.io/projected/8a278abf-8c59-4454-94d0-a0d0768cbec5-kube-api-access-r9crd\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221616 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a278abf-8c59-4454-94d0-a0d0768cbec5-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221643 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a278abf-8c59-4454-94d0-a0d0768cbec5-serving-cert\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221666 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-trusted-ca\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221684 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jshgm\" (UniqueName: 
\"kubernetes.io/projected/27ab8945-6a5b-4f7d-b893-6358da214499-kube-api-access-jshgm\") pod \"cluster-storage-operator-f94476f49-m2bj7\" (UID: \"27ab8945-6a5b-4f7d-b893-6358da214499\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221701 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221734 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8a278abf-8c59-4454-94d0-a0d0768cbec5-snapshots\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221760 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/27ab8945-6a5b-4f7d-b893-6358da214499-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-m2bj7\" (UID: \"27ab8945-6a5b-4f7d-b893-6358da214499\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221780 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwnq7\" (UniqueName: 
\"kubernetes.io/projected/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-kube-api-access-mwnq7\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221798 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221816 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a278abf-8c59-4454-94d0-a0d0768cbec5-service-ca-bundle\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221847 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-bound-sa-token\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221868 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sktfg\" (UniqueName: \"kubernetes.io/projected/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-kube-api-access-sktfg\") 
pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221885 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jljjg\" (UniqueName: \"kubernetes.io/projected/49044786-483a-406e-8750-f6ded400841d-kube-api-access-jljjg\") pod \"control-plane-machine-set-operator-686847ff5f-2tpv8\" (UID: \"49044786-483a-406e-8750-f6ded400841d\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.221904 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.222008 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.222053 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.222234 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-metrics-tls\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.222260 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.222969 master-0 kubenswrapper[7744]: I0220 14:49:14.222290 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49044786-483a-406e-8750-f6ded400841d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-2tpv8\" (UID: \"49044786-483a-406e-8750-f6ded400841d\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" Feb 20 14:49:14.232216 master-0 kubenswrapper[7744]: I0220 14:49:14.230414 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 20 14:49:14.232216 master-0 kubenswrapper[7744]: I0220 14:49:14.230518 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" 
Feb 20 14:49:14.232216 master-0 kubenswrapper[7744]: I0220 14:49:14.231546 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-sn97p" Feb 20 14:49:14.232216 master-0 kubenswrapper[7744]: I0220 14:49:14.231739 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-mxnq7" Feb 20 14:49:14.232216 master-0 kubenswrapper[7744]: I0220 14:49:14.231771 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 20 14:49:14.232216 master-0 kubenswrapper[7744]: I0220 14:49:14.232119 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 20 14:49:14.232216 master-0 kubenswrapper[7744]: I0220 14:49:14.232223 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 20 14:49:14.232612 master-0 kubenswrapper[7744]: I0220 14:49:14.232293 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 20 14:49:14.232612 master-0 kubenswrapper[7744]: I0220 14:49:14.232497 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 20 14:49:14.235787 master-0 kubenswrapper[7744]: I0220 14:49:14.233234 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"] Feb 20 14:49:14.235787 master-0 kubenswrapper[7744]: I0220 14:49:14.233920 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb" Feb 20 14:49:14.235787 master-0 kubenswrapper[7744]: I0220 14:49:14.235643 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7"] Feb 20 14:49:14.239714 master-0 kubenswrapper[7744]: I0220 14:49:14.236846 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 20 14:49:14.239714 master-0 kubenswrapper[7744]: I0220 14:49:14.239396 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"] Feb 20 14:49:14.254896 master-0 kubenswrapper[7744]: I0220 14:49:14.254841 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"] Feb 20 14:49:14.305183 master-0 kubenswrapper[7744]: I0220 14:49:14.305149 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" Feb 20 14:49:14.326852 master-0 kubenswrapper[7744]: I0220 14:49:14.326570 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b385880b-a26b-4353-8f6f-b7f926bcc67c-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" Feb 20 14:49:14.326852 master-0 kubenswrapper[7744]: I0220 14:49:14.326641 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwnq7\" (UniqueName: \"kubernetes.io/projected/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-kube-api-access-mwnq7\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 14:49:14.326852 master-0 kubenswrapper[7744]: I0220 14:49:14.326687 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.326852 master-0 kubenswrapper[7744]: I0220 14:49:14.326722 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a278abf-8c59-4454-94d0-a0d0768cbec5-service-ca-bundle\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 
14:49:14.326852 master-0 kubenswrapper[7744]: I0220 14:49:14.326752 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-images\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 14:49:14.327106 master-0 kubenswrapper[7744]: I0220 14:49:14.327007 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvthk\" (UniqueName: \"kubernetes.io/projected/a4339bd5-b8d1-467e-8158-4464ea901148-kube-api-access-jvthk\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 14:49:14.327139 master-0 kubenswrapper[7744]: I0220 14:49:14.327100 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 14:49:14.327171 master-0 kubenswrapper[7744]: I0220 14:49:14.327143 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk2lq\" (UniqueName: \"kubernetes.io/projected/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-kube-api-access-gk2lq\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 14:49:14.327204 master-0 kubenswrapper[7744]: I0220 14:49:14.327183 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-fwclx\" (UniqueName: \"kubernetes.io/projected/b385880b-a26b-4353-8f6f-b7f926bcc67c-kube-api-access-fwclx\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" Feb 20 14:49:14.327237 master-0 kubenswrapper[7744]: I0220 14:49:14.327216 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-bound-sa-token\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 14:49:14.327268 master-0 kubenswrapper[7744]: I0220 14:49:14.327236 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-config\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 14:49:14.327301 master-0 kubenswrapper[7744]: I0220 14:49:14.327275 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sktfg\" (UniqueName: \"kubernetes.io/projected/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-kube-api-access-sktfg\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.327301 master-0 kubenswrapper[7744]: I0220 14:49:14.327296 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcffg\" (UniqueName: \"kubernetes.io/projected/86f6836b-b018-4c7a-87ad-51809a4b9c7a-kube-api-access-wcffg\") pod 
\"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 14:49:14.327374 master-0 kubenswrapper[7744]: I0220 14:49:14.327316 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jljjg\" (UniqueName: \"kubernetes.io/projected/49044786-483a-406e-8750-f6ded400841d-kube-api-access-jljjg\") pod \"control-plane-machine-set-operator-686847ff5f-2tpv8\" (UID: \"49044786-483a-406e-8750-f6ded400841d\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" Feb 20 14:49:14.327374 master-0 kubenswrapper[7744]: I0220 14:49:14.327337 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4339bd5-b8d1-467e-8158-4464ea901148-serving-cert\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 14:49:14.327374 master-0 kubenswrapper[7744]: I0220 14:49:14.327361 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.327460 master-0 kubenswrapper[7744]: I0220 14:49:14.327384 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " 
pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 14:49:14.327460 master-0 kubenswrapper[7744]: I0220 14:49:14.327405 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-metrics-tls\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 14:49:14.327460 master-0 kubenswrapper[7744]: I0220 14:49:14.327426 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 14:49:14.327460 master-0 kubenswrapper[7744]: I0220 14:49:14.327452 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxncg\" (UniqueName: \"kubernetes.io/projected/16d6dd52-d73b-4696-873e-00a6d4bb2c77-kube-api-access-sxncg\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 14:49:14.327575 master-0 kubenswrapper[7744]: I0220 14:49:14.327483 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.327575 master-0 kubenswrapper[7744]: I0220 14:49:14.327506 7744 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49044786-483a-406e-8750-f6ded400841d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-2tpv8\" (UID: \"49044786-483a-406e-8750-f6ded400841d\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" Feb 20 14:49:14.327575 master-0 kubenswrapper[7744]: I0220 14:49:14.327523 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 14:49:14.328657 master-0 kubenswrapper[7744]: I0220 14:49:14.328632 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a278abf-8c59-4454-94d0-a0d0768cbec5-service-ca-bundle\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.329387 master-0 kubenswrapper[7744]: I0220 14:49:14.329360 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.329387 master-0 kubenswrapper[7744]: I0220 14:49:14.329379 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.329969 master-0 kubenswrapper[7744]: I0220 14:49:14.329941 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.330238 master-0 kubenswrapper[7744]: I0220 14:49:14.330212 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-srv-cert\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" Feb 20 14:49:14.330316 master-0 kubenswrapper[7744]: I0220 14:49:14.330297 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9crd\" (UniqueName: \"kubernetes.io/projected/8a278abf-8c59-4454-94d0-a0d0768cbec5-kube-api-access-r9crd\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.330358 master-0 kubenswrapper[7744]: I0220 14:49:14.330337 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a4339bd5-b8d1-467e-8158-4464ea901148-available-featuregates\") pod 
\"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 14:49:14.330398 master-0 kubenswrapper[7744]: I0220 14:49:14.330377 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b385880b-a26b-4353-8f6f-b7f926bcc67c-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" Feb 20 14:49:14.330460 master-0 kubenswrapper[7744]: I0220 14:49:14.330411 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a278abf-8c59-4454-94d0-a0d0768cbec5-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.330460 master-0 kubenswrapper[7744]: I0220 14:49:14.330432 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 14:49:14.330460 master-0 kubenswrapper[7744]: I0220 14:49:14.330450 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/16d6dd52-d73b-4696-873e-00a6d4bb2c77-images\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 14:49:14.330568 master-0 kubenswrapper[7744]: I0220 
14:49:14.330488 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a278abf-8c59-4454-94d0-a0d0768cbec5-serving-cert\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.330614 master-0 kubenswrapper[7744]: I0220 14:49:14.330576 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16d6dd52-d73b-4696-873e-00a6d4bb2c77-proxy-tls\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 14:49:14.330614 master-0 kubenswrapper[7744]: I0220 14:49:14.330605 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16d6dd52-d73b-4696-873e-00a6d4bb2c77-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 14:49:14.330696 master-0 kubenswrapper[7744]: I0220 14:49:14.330629 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-trusted-ca\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 14:49:14.330696 master-0 kubenswrapper[7744]: I0220 14:49:14.330651 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" Feb 20 14:49:14.330873 master-0 kubenswrapper[7744]: I0220 14:49:14.330841 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jshgm\" (UniqueName: \"kubernetes.io/projected/27ab8945-6a5b-4f7d-b893-6358da214499-kube-api-access-jshgm\") pod \"cluster-storage-operator-f94476f49-m2bj7\" (UID: \"27ab8945-6a5b-4f7d-b893-6358da214499\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" Feb 20 14:49:14.330939 master-0 kubenswrapper[7744]: I0220 14:49:14.330892 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl7wm\" (UniqueName: \"kubernetes.io/projected/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-kube-api-access-tl7wm\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" Feb 20 14:49:14.330986 master-0 kubenswrapper[7744]: I0220 14:49:14.330948 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.331025 master-0 kubenswrapper[7744]: I0220 14:49:14.330985 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8a278abf-8c59-4454-94d0-a0d0768cbec5-snapshots\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: 
\"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.331025 master-0 kubenswrapper[7744]: I0220 14:49:14.331012 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/27ab8945-6a5b-4f7d-b893-6358da214499-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-m2bj7\" (UID: \"27ab8945-6a5b-4f7d-b893-6358da214499\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" Feb 20 14:49:14.331313 master-0 kubenswrapper[7744]: I0220 14:49:14.331289 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a278abf-8c59-4454-94d0-a0d0768cbec5-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.331852 master-0 kubenswrapper[7744]: I0220 14:49:14.331817 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-metrics-tls\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 14:49:14.331977 master-0 kubenswrapper[7744]: I0220 14:49:14.331942 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.332201 master-0 kubenswrapper[7744]: I0220 14:49:14.332166 7744 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-trusted-ca\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 14:49:14.332312 master-0 kubenswrapper[7744]: I0220 14:49:14.332289 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8a278abf-8c59-4454-94d0-a0d0768cbec5-snapshots\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.333356 master-0 kubenswrapper[7744]: I0220 14:49:14.333325 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a278abf-8c59-4454-94d0-a0d0768cbec5-serving-cert\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.335075 master-0 kubenswrapper[7744]: I0220 14:49:14.335040 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/27ab8945-6a5b-4f7d-b893-6358da214499-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-m2bj7\" (UID: \"27ab8945-6a5b-4f7d-b893-6358da214499\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" Feb 20 14:49:14.343518 master-0 kubenswrapper[7744]: I0220 14:49:14.343489 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwnq7\" (UniqueName: \"kubernetes.io/projected/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-kube-api-access-mwnq7\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " 
pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 14:49:14.346842 master-0 kubenswrapper[7744]: I0220 14:49:14.346805 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49044786-483a-406e-8750-f6ded400841d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-2tpv8\" (UID: \"49044786-483a-406e-8750-f6ded400841d\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" Feb 20 14:49:14.348040 master-0 kubenswrapper[7744]: I0220 14:49:14.348013 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jshgm\" (UniqueName: \"kubernetes.io/projected/27ab8945-6a5b-4f7d-b893-6358da214499-kube-api-access-jshgm\") pod \"cluster-storage-operator-f94476f49-m2bj7\" (UID: \"27ab8945-6a5b-4f7d-b893-6358da214499\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" Feb 20 14:49:14.348331 master-0 kubenswrapper[7744]: I0220 14:49:14.348307 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf" Feb 20 14:49:14.348607 master-0 kubenswrapper[7744]: I0220 14:49:14.348574 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-bound-sa-token\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 14:49:14.351031 master-0 kubenswrapper[7744]: I0220 14:49:14.350990 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jljjg\" (UniqueName: \"kubernetes.io/projected/49044786-483a-406e-8750-f6ded400841d-kube-api-access-jljjg\") pod \"control-plane-machine-set-operator-686847ff5f-2tpv8\" (UID: \"49044786-483a-406e-8750-f6ded400841d\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" Feb 20 14:49:14.355692 master-0 kubenswrapper[7744]: I0220 14:49:14.355656 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sktfg\" (UniqueName: \"kubernetes.io/projected/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-kube-api-access-sktfg\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.359883 master-0 kubenswrapper[7744]: I0220 14:49:14.359846 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9crd\" (UniqueName: \"kubernetes.io/projected/8a278abf-8c59-4454-94d0-a0d0768cbec5-kube-api-access-r9crd\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 14:49:14.369890 master-0 kubenswrapper[7744]: I0220 14:49:14.369829 7744 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x" Feb 20 14:49:14.380992 master-0 kubenswrapper[7744]: W0220 14:49:14.380948 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cb0fc3f_6897_4927_80fa_40cdf43be9a9.slice/crio-256b92231ab60b649b2c69cb8d06f03de4978bccecb61c0ac5c4c69b38a812a5 WatchSource:0}: Error finding container 256b92231ab60b649b2c69cb8d06f03de4978bccecb61c0ac5c4c69b38a812a5: Status 404 returned error can't find the container with id 256b92231ab60b649b2c69cb8d06f03de4978bccecb61c0ac5c4c69b38a812a5 Feb 20 14:49:14.399076 master-0 kubenswrapper[7744]: I0220 14:49:14.397336 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:14.423958 master-0 kubenswrapper[7744]: W0220 14:49:14.423838 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fa7b31e_a95e_44dc_9c4c_211e8b5718e4.slice/crio-527e8d5ed18d502241ed77c666a2e3681427d1002e24a244333ea95634d1d2f8 WatchSource:0}: Error finding container 527e8d5ed18d502241ed77c666a2e3681427d1002e24a244333ea95634d1d2f8: Status 404 returned error can't find the container with id 527e8d5ed18d502241ed77c666a2e3681427d1002e24a244333ea95634d1d2f8 Feb 20 14:49:14.429389 master-0 kubenswrapper[7744]: I0220 14:49:14.429353 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" Feb 20 14:49:14.431776 master-0 kubenswrapper[7744]: I0220 14:49:14.431739 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-srv-cert\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" Feb 20 14:49:14.431821 master-0 kubenswrapper[7744]: I0220 14:49:14.431782 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 14:49:14.431854 master-0 kubenswrapper[7744]: I0220 14:49:14.431815 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a4339bd5-b8d1-467e-8158-4464ea901148-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 14:49:14.431854 master-0 kubenswrapper[7744]: I0220 14:49:14.431846 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b385880b-a26b-4353-8f6f-b7f926bcc67c-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" Feb 20 14:49:14.431908 master-0 kubenswrapper[7744]: I0220 14:49:14.431870 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 14:49:14.431908 master-0 kubenswrapper[7744]: I0220 14:49:14.431894 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/16d6dd52-d73b-4696-873e-00a6d4bb2c77-images\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 14:49:14.431986 master-0 kubenswrapper[7744]: I0220 14:49:14.431965 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47sqj\" (UniqueName: \"kubernetes.io/projected/64e9eca9-bbdd-4eca-9219-922bbab9b388-kube-api-access-47sqj\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb" Feb 20 14:49:14.432015 master-0 kubenswrapper[7744]: I0220 14:49:14.431992 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16d6dd52-d73b-4696-873e-00a6d4bb2c77-proxy-tls\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 14:49:14.432046 master-0 kubenswrapper[7744]: I0220 14:49:14.432017 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16d6dd52-d73b-4696-873e-00a6d4bb2c77-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " 
pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 14:49:14.432570 master-0 kubenswrapper[7744]: I0220 14:49:14.432536 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" Feb 20 14:49:14.432609 master-0 kubenswrapper[7744]: I0220 14:49:14.432578 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl7wm\" (UniqueName: \"kubernetes.io/projected/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-kube-api-access-tl7wm\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" Feb 20 14:49:14.432609 master-0 kubenswrapper[7744]: I0220 14:49:14.432580 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a4339bd5-b8d1-467e-8158-4464ea901148-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 14:49:14.432609 master-0 kubenswrapper[7744]: I0220 14:49:14.432601 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b385880b-a26b-4353-8f6f-b7f926bcc67c-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" Feb 20 14:49:14.432705 master-0 kubenswrapper[7744]: I0220 14:49:14.432643 7744 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-images\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 14:49:14.432821 master-0 kubenswrapper[7744]: I0220 14:49:14.432794 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvthk\" (UniqueName: \"kubernetes.io/projected/a4339bd5-b8d1-467e-8158-4464ea901148-kube-api-access-jvthk\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 14:49:14.432857 master-0 kubenswrapper[7744]: I0220 14:49:14.432836 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 14:49:14.432897 master-0 kubenswrapper[7744]: I0220 14:49:14.432867 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk2lq\" (UniqueName: \"kubernetes.io/projected/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-kube-api-access-gk2lq\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 14:49:14.432950 master-0 kubenswrapper[7744]: I0220 14:49:14.432895 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-srv-cert\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: 
\"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb" Feb 20 14:49:14.432950 master-0 kubenswrapper[7744]: I0220 14:49:14.432941 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb" Feb 20 14:49:14.433015 master-0 kubenswrapper[7744]: I0220 14:49:14.432967 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwclx\" (UniqueName: \"kubernetes.io/projected/b385880b-a26b-4353-8f6f-b7f926bcc67c-kube-api-access-fwclx\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" Feb 20 14:49:14.433015 master-0 kubenswrapper[7744]: I0220 14:49:14.432991 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-config\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 14:49:14.433075 master-0 kubenswrapper[7744]: I0220 14:49:14.433015 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcffg\" (UniqueName: \"kubernetes.io/projected/86f6836b-b018-4c7a-87ad-51809a4b9c7a-kube-api-access-wcffg\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 14:49:14.433075 master-0 kubenswrapper[7744]: I0220 14:49:14.433044 7744 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4339bd5-b8d1-467e-8158-4464ea901148-serving-cert\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 14:49:14.433075 master-0 kubenswrapper[7744]: I0220 14:49:14.433070 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 14:49:14.434054 master-0 kubenswrapper[7744]: I0220 14:49:14.433305 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b385880b-a26b-4353-8f6f-b7f926bcc67c-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" Feb 20 14:49:14.434054 master-0 kubenswrapper[7744]: I0220 14:49:14.433554 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 14:49:14.434054 master-0 kubenswrapper[7744]: I0220 14:49:14.434002 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-config\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: 
\"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 14:49:14.435191 master-0 kubenswrapper[7744]: I0220 14:49:14.434951 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-images\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 14:49:14.435191 master-0 kubenswrapper[7744]: I0220 14:49:14.435063 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 14:49:14.435191 master-0 kubenswrapper[7744]: I0220 14:49:14.435147 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxncg\" (UniqueName: \"kubernetes.io/projected/16d6dd52-d73b-4696-873e-00a6d4bb2c77-kube-api-access-sxncg\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 14:49:14.441881 master-0 kubenswrapper[7744]: I0220 14:49:14.435987 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16d6dd52-d73b-4696-873e-00a6d4bb2c77-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 14:49:14.441881 master-0 kubenswrapper[7744]: I0220 14:49:14.436507 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/16d6dd52-d73b-4696-873e-00a6d4bb2c77-images\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7"
Feb 20 14:49:14.444008 master-0 kubenswrapper[7744]: I0220 14:49:14.443891 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b385880b-a26b-4353-8f6f-b7f926bcc67c-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r"
Feb 20 14:49:14.451191 master-0 kubenswrapper[7744]: I0220 14:49:14.450251 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"
Feb 20 14:49:14.452501 master-0 kubenswrapper[7744]: I0220 14:49:14.452045 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"
Feb 20 14:49:14.452501 master-0 kubenswrapper[7744]: I0220 14:49:14.452109 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4339bd5-b8d1-467e-8158-4464ea901148-serving-cert\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc"
Feb 20 14:49:14.452501 master-0 kubenswrapper[7744]: I0220 14:49:14.452121 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"
Feb 20 14:49:14.452812 master-0 kubenswrapper[7744]: I0220 14:49:14.452777 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"
Feb 20 14:49:14.453803 master-0 kubenswrapper[7744]: I0220 14:49:14.453757 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16d6dd52-d73b-4696-873e-00a6d4bb2c77-proxy-tls\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7"
Feb 20 14:49:14.455255 master-0 kubenswrapper[7744]: I0220 14:49:14.455230 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcffg\" (UniqueName: \"kubernetes.io/projected/86f6836b-b018-4c7a-87ad-51809a4b9c7a-kube-api-access-wcffg\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"
Feb 20 14:49:14.455629 master-0 kubenswrapper[7744]: I0220 14:49:14.455599 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl7wm\" (UniqueName: \"kubernetes.io/projected/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-kube-api-access-tl7wm\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"
Feb 20 14:49:14.456835 master-0 kubenswrapper[7744]: I0220 14:49:14.456801 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwclx\" (UniqueName: \"kubernetes.io/projected/b385880b-a26b-4353-8f6f-b7f926bcc67c-kube-api-access-fwclx\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r"
Feb 20 14:49:14.456945 master-0 kubenswrapper[7744]: I0220 14:49:14.456910 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvthk\" (UniqueName: \"kubernetes.io/projected/a4339bd5-b8d1-467e-8158-4464ea901148-kube-api-access-jvthk\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc"
Feb 20 14:49:14.460613 master-0 kubenswrapper[7744]: I0220 14:49:14.460570 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxncg\" (UniqueName: \"kubernetes.io/projected/16d6dd52-d73b-4696-873e-00a6d4bb2c77-kube-api-access-sxncg\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7"
Feb 20 14:49:14.461104 master-0 kubenswrapper[7744]: I0220 14:49:14.461076 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk2lq\" (UniqueName: \"kubernetes.io/projected/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-kube-api-access-gk2lq\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"
Feb 20 14:49:14.464639 master-0 kubenswrapper[7744]: I0220 14:49:14.464592 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"
Feb 20 14:49:14.481473 master-0 kubenswrapper[7744]: I0220 14:49:14.481424 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-srv-cert\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"
Feb 20 14:49:14.496542 master-0 kubenswrapper[7744]: I0220 14:49:14.496158 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw"
Feb 20 14:49:14.506679 master-0 kubenswrapper[7744]: I0220 14:49:14.506648 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-b9jmk"
Feb 20 14:49:14.532364 master-0 kubenswrapper[7744]: I0220 14:49:14.532316 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8"
Feb 20 14:49:14.537833 master-0 kubenswrapper[7744]: I0220 14:49:14.536881 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-srv-cert\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 14:49:14.537833 master-0 kubenswrapper[7744]: I0220 14:49:14.536947 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 14:49:14.537833 master-0 kubenswrapper[7744]: I0220 14:49:14.537017 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47sqj\" (UniqueName: \"kubernetes.io/projected/64e9eca9-bbdd-4eca-9219-922bbab9b388-kube-api-access-47sqj\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 14:49:14.541657 master-0 kubenswrapper[7744]: I0220 14:49:14.541623 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 14:49:14.542830 master-0 kubenswrapper[7744]: I0220 14:49:14.542459 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r"
Feb 20 14:49:14.552563 master-0 kubenswrapper[7744]: I0220 14:49:14.552514 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-srv-cert\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 14:49:14.557341 master-0 kubenswrapper[7744]: I0220 14:49:14.557288 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47sqj\" (UniqueName: \"kubernetes.io/projected/64e9eca9-bbdd-4eca-9219-922bbab9b388-kube-api-access-47sqj\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 14:49:14.576755 master-0 kubenswrapper[7744]: I0220 14:49:14.576595 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"
Feb 20 14:49:14.585102 master-0 kubenswrapper[7744]: I0220 14:49:14.584915 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"
Feb 20 14:49:14.598520 master-0 kubenswrapper[7744]: I0220 14:49:14.598418 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc"
Feb 20 14:49:14.620534 master-0 kubenswrapper[7744]: I0220 14:49:14.613973 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"
Feb 20 14:49:14.660474 master-0 kubenswrapper[7744]: I0220 14:49:14.660210 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7"
Feb 20 14:49:14.682861 master-0 kubenswrapper[7744]: I0220 14:49:14.682792 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 14:49:14.751316 master-0 kubenswrapper[7744]: I0220 14:49:14.750218 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"]
Feb 20 14:49:14.786054 master-0 kubenswrapper[7744]: I0220 14:49:14.785238 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x"]
Feb 20 14:49:14.936316 master-0 kubenswrapper[7744]: I0220 14:49:14.936207 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-b9jmk"]
Feb 20 14:49:14.946130 master-0 kubenswrapper[7744]: W0220 14:49:14.946077 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a278abf_8c59_4454_94d0_a0d0768cbec5.slice/crio-56dab50a6ee92d8b7787a1ffbdfc72e9a26511781eb108040e7d6dc84a65109f WatchSource:0}: Error finding container 56dab50a6ee92d8b7787a1ffbdfc72e9a26511781eb108040e7d6dc84a65109f: Status 404 returned error can't find the container with id 56dab50a6ee92d8b7787a1ffbdfc72e9a26511781eb108040e7d6dc84a65109f
Feb 20 14:49:14.979409 master-0 kubenswrapper[7744]: I0220 14:49:14.979362 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-fjtrw"]
Feb 20 14:49:14.985841 master-0 kubenswrapper[7744]: W0220 14:49:14.985792 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b6a656c_40d6_4c63_9c6f_ac943eae4c9a.slice/crio-22094081262cfd9afca75424166ecb944e973d770312e29078a1dee4fb675d30 WatchSource:0}: Error finding container 22094081262cfd9afca75424166ecb944e973d770312e29078a1dee4fb675d30: Status 404 returned error can't find the container with id 22094081262cfd9afca75424166ecb944e973d770312e29078a1dee4fb675d30
Feb 20 14:49:14.997110 master-0 kubenswrapper[7744]: I0220 14:49:14.996836 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7"]
Feb 20 14:49:15.006618 master-0 kubenswrapper[7744]: W0220 14:49:15.006583 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27ab8945_6a5b_4f7d_b893_6358da214499.slice/crio-07243cbc35256d0bbc44485dfcf1dcdc835463392fa9dc5f89599380e929e672 WatchSource:0}: Error finding container 07243cbc35256d0bbc44485dfcf1dcdc835463392fa9dc5f89599380e929e672: Status 404 returned error can't find the container with id 07243cbc35256d0bbc44485dfcf1dcdc835463392fa9dc5f89599380e929e672
Feb 20 14:49:15.144780 master-0 kubenswrapper[7744]: I0220 14:49:15.144658 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"]
Feb 20 14:49:15.153995 master-0 kubenswrapper[7744]: W0220 14:49:15.153243 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86f6836b_b018_4c7a_87ad_51809a4b9c7a.slice/crio-51c5a5d32ca643efba642911927baab174d9c9270d18541b0810089261e8c8d5 WatchSource:0}: Error finding container 51c5a5d32ca643efba642911927baab174d9c9270d18541b0810089261e8c8d5: Status 404 returned error can't find the container with id 51c5a5d32ca643efba642911927baab174d9c9270d18541b0810089261e8c8d5
Feb 20 14:49:15.158031 master-0 kubenswrapper[7744]: I0220 14:49:15.157935 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8"]
Feb 20 14:49:15.168333 master-0 kubenswrapper[7744]: I0220 14:49:15.168296 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r"]
Feb 20 14:49:15.171590 master-0 kubenswrapper[7744]: W0220 14:49:15.171538 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49044786_483a_406e_8750_f6ded400841d.slice/crio-92c9b6ef7965615602e16b5814c26d9915a23507222fc502b624945d6f4ccc53 WatchSource:0}: Error finding container 92c9b6ef7965615602e16b5814c26d9915a23507222fc502b624945d6f4ccc53: Status 404 returned error can't find the container with id 92c9b6ef7965615602e16b5814c26d9915a23507222fc502b624945d6f4ccc53
Feb 20 14:49:15.175788 master-0 kubenswrapper[7744]: I0220 14:49:15.175683 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"]
Feb 20 14:49:15.181017 master-0 kubenswrapper[7744]: W0220 14:49:15.180971 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb385880b_a26b_4353_8f6f_b7f926bcc67c.slice/crio-437abb0aba17c9c29dae7086b861fc64a62c90a30c1567fbdec9a15f52cef039 WatchSource:0}: Error finding container 437abb0aba17c9c29dae7086b861fc64a62c90a30c1567fbdec9a15f52cef039: Status 404 returned error can't find the container with id 437abb0aba17c9c29dae7086b861fc64a62c90a30c1567fbdec9a15f52cef039
Feb 20 14:49:15.182466 master-0 kubenswrapper[7744]: W0220 14:49:15.182432 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2edb5bfc_a0a7_4bc9_80f5_c14436f9af7a.slice/crio-56784add7fab2d6fa30c1dec4a904d183b8bd0ff401f8eca8e9ad2aff7741c30 WatchSource:0}: Error finding container 56784add7fab2d6fa30c1dec4a904d183b8bd0ff401f8eca8e9ad2aff7741c30: Status 404 returned error can't find the container with id 56784add7fab2d6fa30c1dec4a904d183b8bd0ff401f8eca8e9ad2aff7741c30
Feb 20 14:49:15.319340 master-0 kubenswrapper[7744]: I0220 14:49:15.319284 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7"]
Feb 20 14:49:15.338314 master-0 kubenswrapper[7744]: W0220 14:49:15.338176 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d6dd52_d73b_4696_873e_00a6d4bb2c77.slice/crio-ed48d3d3cb753c9bbe342f9ecdd79f0991ed3456ddbdf3081cbeeab5126bcab1 WatchSource:0}: Error finding container ed48d3d3cb753c9bbe342f9ecdd79f0991ed3456ddbdf3081cbeeab5126bcab1: Status 404 returned error can't find the container with id ed48d3d3cb753c9bbe342f9ecdd79f0991ed3456ddbdf3081cbeeab5126bcab1
Feb 20 14:49:15.359212 master-0 kubenswrapper[7744]: I0220 14:49:15.359128 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" event={"ID":"49044786-483a-406e-8750-f6ded400841d","Type":"ContainerStarted","Data":"92c9b6ef7965615602e16b5814c26d9915a23507222fc502b624945d6f4ccc53"}
Feb 20 14:49:15.360477 master-0 kubenswrapper[7744]: I0220 14:49:15.360302 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerStarted","Data":"22094081262cfd9afca75424166ecb944e973d770312e29078a1dee4fb675d30"}
Feb 20 14:49:15.374211 master-0 kubenswrapper[7744]: I0220 14:49:15.373253 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"]
Feb 20 14:49:15.378548 master-0 kubenswrapper[7744]: I0220 14:49:15.378466 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc"]
Feb 20 14:49:15.379784 master-0 kubenswrapper[7744]: I0220 14:49:15.379125 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf" event={"ID":"3cb0fc3f-6897-4927-80fa-40cdf43be9a9","Type":"ContainerStarted","Data":"9a9be4e938a26463163025a87daa920277947a01e83f7f6d8b5804bafc0ed314"}
Feb 20 14:49:15.379784 master-0 kubenswrapper[7744]: I0220 14:49:15.379160 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf" event={"ID":"3cb0fc3f-6897-4927-80fa-40cdf43be9a9","Type":"ContainerStarted","Data":"256b92231ab60b649b2c69cb8d06f03de4978bccecb61c0ac5c4c69b38a812a5"}
Feb 20 14:49:15.381344 master-0 kubenswrapper[7744]: I0220 14:49:15.381159 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" event={"ID":"8a278abf-8c59-4454-94d0-a0d0768cbec5","Type":"ContainerStarted","Data":"56dab50a6ee92d8b7787a1ffbdfc72e9a26511781eb108040e7d6dc84a65109f"}
Feb 20 14:49:15.382860 master-0 kubenswrapper[7744]: I0220 14:49:15.382837 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x" event={"ID":"ee3a6748-0bbc-41bf-8726-a8db18faf03b","Type":"ContainerStarted","Data":"5412cad37cfea94450b3688c380c9cc1161ff7a9a7f0b141297d24e746b33629"}
Feb 20 14:49:15.384592 master-0 kubenswrapper[7744]: W0220 14:49:15.384418 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64e9eca9_bbdd_4eca_9219_922bbab9b388.slice/crio-cc5528fa6db2bfe114c1842f536c398cb14a3103bc976fa904abdc30e48bc9b3 WatchSource:0}: Error finding container cc5528fa6db2bfe114c1842f536c398cb14a3103bc976fa904abdc30e48bc9b3: Status 404 returned error can't find the container with id cc5528fa6db2bfe114c1842f536c398cb14a3103bc976fa904abdc30e48bc9b3
Feb 20 14:49:15.384592 master-0 kubenswrapper[7744]: I0220 14:49:15.384442 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" event={"ID":"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a","Type":"ContainerStarted","Data":"56784add7fab2d6fa30c1dec4a904d183b8bd0ff401f8eca8e9ad2aff7741c30"}
Feb 20 14:49:15.389571 master-0 kubenswrapper[7744]: I0220 14:49:15.386047 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" event={"ID":"16d6dd52-d73b-4696-873e-00a6d4bb2c77","Type":"ContainerStarted","Data":"ed48d3d3cb753c9bbe342f9ecdd79f0991ed3456ddbdf3081cbeeab5126bcab1"}
Feb 20 14:49:15.389571 master-0 kubenswrapper[7744]: W0220 14:49:15.387349 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4339bd5_b8d1_467e_8158_4464ea901148.slice/crio-a9fb4904f90243607c1bd114c0e1c541fb17de9f6f5ce80d7f75369901ce613b WatchSource:0}: Error finding container a9fb4904f90243607c1bd114c0e1c541fb17de9f6f5ce80d7f75369901ce613b: Status 404 returned error can't find the container with id a9fb4904f90243607c1bd114c0e1c541fb17de9f6f5ce80d7f75369901ce613b
Feb 20 14:49:15.390030 master-0 kubenswrapper[7744]: I0220 14:49:15.389719 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" event={"ID":"b385880b-a26b-4353-8f6f-b7f926bcc67c","Type":"ContainerStarted","Data":"437abb0aba17c9c29dae7086b861fc64a62c90a30c1567fbdec9a15f52cef039"}
Feb 20 14:49:15.396676 master-0 kubenswrapper[7744]: I0220 14:49:15.396633 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"]
Feb 20 14:49:15.400693 master-0 kubenswrapper[7744]: I0220 14:49:15.400650 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" event={"ID":"27ab8945-6a5b-4f7d-b893-6358da214499","Type":"ContainerStarted","Data":"07243cbc35256d0bbc44485dfcf1dcdc835463392fa9dc5f89599380e929e672"}
Feb 20 14:49:15.403576 master-0 kubenswrapper[7744]: I0220 14:49:15.403536 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" event={"ID":"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4","Type":"ContainerStarted","Data":"527e8d5ed18d502241ed77c666a2e3681427d1002e24a244333ea95634d1d2f8"}
Feb 20 14:49:15.406288 master-0 kubenswrapper[7744]: I0220 14:49:15.406246 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" event={"ID":"6949e9d5-460c-4b63-94cb-1b20ad75ee1c","Type":"ContainerStarted","Data":"07e9c574c476b552e4880ea04698b85b76f727446caf5a26cb7851d60cbd7d25"}
Feb 20 14:49:15.406353 master-0 kubenswrapper[7744]: I0220 14:49:15.406291 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" event={"ID":"6949e9d5-460c-4b63-94cb-1b20ad75ee1c","Type":"ContainerStarted","Data":"8468bd2a2161175e696f20868531488b079471cbb37c953cccf04ab9a47ce2b3"}
Feb 20 14:49:15.408364 master-0 kubenswrapper[7744]: I0220 14:49:15.408325 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" event={"ID":"86f6836b-b018-4c7a-87ad-51809a4b9c7a","Type":"ContainerStarted","Data":"51c5a5d32ca643efba642911927baab174d9c9270d18541b0810089261e8c8d5"}
Feb 20 14:49:15.410436 master-0 kubenswrapper[7744]: W0220 14:49:15.410394 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bedbe69_fc4b_4bd7_bcc2_acead927eda2.slice/crio-934ad9d048e353486054177eacce7219c994c68dfad561ddfd4035fc938101d3 WatchSource:0}: Error finding container 934ad9d048e353486054177eacce7219c994c68dfad561ddfd4035fc938101d3: Status 404 returned error can't find the container with id 934ad9d048e353486054177eacce7219c994c68dfad561ddfd4035fc938101d3
Feb 20 14:49:16.413508 master-0 kubenswrapper[7744]: I0220 14:49:16.413459 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" event={"ID":"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a","Type":"ContainerStarted","Data":"0280eb835d13df084844046377ae19fb68b78454fc360d2bbb9b0a6d7af5b23f"}
Feb 20 14:49:16.414331 master-0 kubenswrapper[7744]: I0220 14:49:16.414259 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"
Feb 20 14:49:16.416120 master-0 kubenswrapper[7744]: I0220 14:49:16.416050 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" event={"ID":"16d6dd52-d73b-4696-873e-00a6d4bb2c77","Type":"ContainerStarted","Data":"78400b95bca26d5b5b6a101069ed9dc03c843794bfde2ac90af2ffd94a8b56c9"}
Feb 20 14:49:16.416185 master-0 kubenswrapper[7744]: I0220 14:49:16.416125 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" event={"ID":"16d6dd52-d73b-4696-873e-00a6d4bb2c77","Type":"ContainerStarted","Data":"e6e379ec088445dd86d2191d2d0584d608d0fb6a75f60858cd436421f083f620"}
Feb 20 14:49:16.418545 master-0 kubenswrapper[7744]: I0220 14:49:16.418496 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" event={"ID":"b385880b-a26b-4353-8f6f-b7f926bcc67c","Type":"ContainerStarted","Data":"fdca7d5d1704511dbbe557b4aad88eeb5de8fd854245d73d9f7c0ff99dbe2f76"}
Feb 20 14:49:16.421671 master-0 kubenswrapper[7744]: I0220 14:49:16.421628 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" event={"ID":"a4339bd5-b8d1-467e-8158-4464ea901148","Type":"ContainerStarted","Data":"a9fb4904f90243607c1bd114c0e1c541fb17de9f6f5ce80d7f75369901ce613b"}
Feb 20 14:49:16.422837 master-0 kubenswrapper[7744]: I0220 14:49:16.422785 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb" event={"ID":"64e9eca9-bbdd-4eca-9219-922bbab9b388","Type":"ContainerStarted","Data":"b0ee251bfbaebe0892c05b520dd9bb47366244efbf8033dfe4b8b8ef8373e2f0"}
Feb 20 14:49:16.422837 master-0 kubenswrapper[7744]: I0220 14:49:16.422828 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb" event={"ID":"64e9eca9-bbdd-4eca-9219-922bbab9b388","Type":"ContainerStarted","Data":"cc5528fa6db2bfe114c1842f536c398cb14a3103bc976fa904abdc30e48bc9b3"}
Feb 20 14:49:16.423905 master-0 kubenswrapper[7744]: I0220 14:49:16.423861 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 14:49:16.424928 master-0 kubenswrapper[7744]: I0220 14:49:16.424883 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" event={"ID":"0bedbe69-fc4b-4bd7-bcc2-acead927eda2","Type":"ContainerStarted","Data":"163c978ceae0c9e27e26a1ee8ee74f1b64e99eeb67823839b6892e27b1c56ac9"}
Feb 20 14:49:16.424997 master-0 kubenswrapper[7744]: I0220 14:49:16.424940 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" event={"ID":"0bedbe69-fc4b-4bd7-bcc2-acead927eda2","Type":"ContainerStarted","Data":"934ad9d048e353486054177eacce7219c994c68dfad561ddfd4035fc938101d3"}
Feb 20 14:49:16.428133 master-0 kubenswrapper[7744]: I0220 14:49:16.428096 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"
Feb 20 14:49:16.429302 master-0 kubenswrapper[7744]: I0220 14:49:16.429272 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 14:49:16.436775 master-0 kubenswrapper[7744]: I0220 14:49:16.436707 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" podStartSLOduration=2.43668805 podStartE2EDuration="2.43668805s" podCreationTimestamp="2026-02-20 14:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:49:16.434869625 +0000 UTC m=+155.637069605" watchObservedRunningTime="2026-02-20 14:49:16.43668805 +0000 UTC m=+155.638887970"
Feb 20 14:49:16.452996 master-0 kubenswrapper[7744]: I0220 14:49:16.452912 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb" podStartSLOduration=2.452888058 podStartE2EDuration="2.452888058s" podCreationTimestamp="2026-02-20 14:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:49:16.451097854 +0000 UTC m=+155.653297784" watchObservedRunningTime="2026-02-20 14:49:16.452888058 +0000 UTC m=+155.655087978"
Feb 20 14:49:16.484845 master-0 kubenswrapper[7744]: I0220 14:49:16.484783 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" podStartSLOduration=2.484765661 podStartE2EDuration="2.484765661s" podCreationTimestamp="2026-02-20 14:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:49:16.48348754 +0000 UTC m=+155.685687460" watchObservedRunningTime="2026-02-20 14:49:16.484765661 +0000 UTC m=+155.686965591"
Feb 20 14:49:16.564687 master-0 kubenswrapper[7744]: I0220 14:49:16.564483 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9wddt"]
Feb 20 14:49:16.566245 master-0 kubenswrapper[7744]: I0220 14:49:16.566213 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 14:49:16.573962 master-0 kubenswrapper[7744]: I0220 14:49:16.568797 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-67ksg"
Feb 20 14:49:16.585002 master-0 kubenswrapper[7744]: I0220 14:49:16.579539 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-catalog-content\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 14:49:16.585002 master-0 kubenswrapper[7744]: I0220 14:49:16.579689 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-utilities\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 14:49:16.585002 master-0 kubenswrapper[7744]: I0220 14:49:16.579735 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtgrt\" (UniqueName: \"kubernetes.io/projected/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-kube-api-access-xtgrt\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 14:49:16.589097 master-0 kubenswrapper[7744]: I0220 14:49:16.589026 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wddt"]
Feb 20 14:49:17.105864 master-0 kubenswrapper[7744]: I0220 14:49:17.105814 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-catalog-content\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 14:49:17.106065 master-0 kubenswrapper[7744]: I0220 14:49:17.105891 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-utilities\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 14:49:17.106065 master-0 kubenswrapper[7744]: I0220 14:49:17.105913 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtgrt\" (UniqueName: \"kubernetes.io/projected/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-kube-api-access-xtgrt\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 14:49:17.107685 master-0 kubenswrapper[7744]: I0220 14:49:17.107537 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-utilities\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 14:49:17.109029 master-0 kubenswrapper[7744]: I0220 14:49:17.108988 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-catalog-content\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 14:49:17.149765 master-0 kubenswrapper[7744]: I0220 14:49:17.149708 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x5fhb"]
Feb 20 14:49:17.158520 master-0 kubenswrapper[7744]: I0220 14:49:17.153979 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x5fhb"
Feb 20 14:49:17.158520 master-0 kubenswrapper[7744]: I0220 14:49:17.156781 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-77fdh"
Feb 20 14:49:17.158748 master-0 kubenswrapper[7744]: I0220 14:49:17.158651 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtgrt\" (UniqueName: \"kubernetes.io/projected/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-kube-api-access-xtgrt\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 14:49:17.171914 master-0 kubenswrapper[7744]: I0220 14:49:17.162429 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x5fhb"]
Feb 20 14:49:17.211225 master-0 kubenswrapper[7744]: I0220 14:49:17.211140 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-utilities\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb"
Feb 20 14:49:17.211517 master-0 kubenswrapper[7744]: I0220 14:49:17.211303 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-catalog-content\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb"
Feb 20 14:49:17.211517 master-0 kubenswrapper[7744]: I0220 14:49:17.211356 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xd6r\" (UniqueName: \"kubernetes.io/projected/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-kube-api-access-2xd6r\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb"
Feb 20 14:49:17.312943 master-0 kubenswrapper[7744]: I0220 14:49:17.312517 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-catalog-content\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb"
Feb 20 14:49:17.312943 master-0 kubenswrapper[7744]: I0220 14:49:17.312583 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xd6r\" (UniqueName: \"kubernetes.io/projected/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-kube-api-access-2xd6r\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb"
Feb 20 14:49:17.313174 master-0 kubenswrapper[7744]: I0220 14:49:17.312953
7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-utilities\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb" Feb 20 14:49:17.313174 master-0 kubenswrapper[7744]: I0220 14:49:17.313043 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-catalog-content\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb" Feb 20 14:49:17.314949 master-0 kubenswrapper[7744]: I0220 14:49:17.314876 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-utilities\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb" Feb 20 14:49:17.336572 master-0 kubenswrapper[7744]: I0220 14:49:17.336508 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xd6r\" (UniqueName: \"kubernetes.io/projected/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-kube-api-access-2xd6r\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb" Feb 20 14:49:17.436109 master-0 kubenswrapper[7744]: I0220 14:49:17.434795 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wddt" Feb 20 14:49:17.494459 master-0 kubenswrapper[7744]: I0220 14:49:17.494426 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x5fhb" Feb 20 14:49:17.714472 master-0 kubenswrapper[7744]: I0220 14:49:17.714327 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884"] Feb 20 14:49:17.715411 master-0 kubenswrapper[7744]: I0220 14:49:17.715375 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:17.717744 master-0 kubenswrapper[7744]: I0220 14:49:17.717706 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 20 14:49:17.730154 master-0 kubenswrapper[7744]: I0220 14:49:17.730094 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884"] Feb 20 14:49:17.821409 master-0 kubenswrapper[7744]: I0220 14:49:17.821272 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-webhook-cert\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:17.821706 master-0 kubenswrapper[7744]: I0220 14:49:17.821437 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-tmpfs\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:17.821706 master-0 kubenswrapper[7744]: I0220 14:49:17.821554 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-apiservice-cert\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:17.821706 master-0 kubenswrapper[7744]: I0220 14:49:17.821682 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntlv2\" (UniqueName: \"kubernetes.io/projected/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-kube-api-access-ntlv2\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:17.908881 master-0 kubenswrapper[7744]: I0220 14:49:17.908831 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wddt"] Feb 20 14:49:17.914403 master-0 kubenswrapper[7744]: W0220 14:49:17.914361 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb011cf4d_4822_4fc7_9f11_62f1f8c5cf4d.slice/crio-236aeb004972a9d3e9949ce545b3cfedb3b4ea60df38f4b61a82d0b2465524af WatchSource:0}: Error finding container 236aeb004972a9d3e9949ce545b3cfedb3b4ea60df38f4b61a82d0b2465524af: Status 404 returned error can't find the container with id 236aeb004972a9d3e9949ce545b3cfedb3b4ea60df38f4b61a82d0b2465524af Feb 20 14:49:17.922567 master-0 kubenswrapper[7744]: I0220 14:49:17.922530 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-tmpfs\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:17.923861 master-0 kubenswrapper[7744]: I0220 14:49:17.923822 7744 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-tmpfs\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:17.924168 master-0 kubenswrapper[7744]: I0220 14:49:17.923876 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-apiservice-cert\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:17.924270 master-0 kubenswrapper[7744]: I0220 14:49:17.924248 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntlv2\" (UniqueName: \"kubernetes.io/projected/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-kube-api-access-ntlv2\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:17.924352 master-0 kubenswrapper[7744]: I0220 14:49:17.924335 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-webhook-cert\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:17.930673 master-0 kubenswrapper[7744]: I0220 14:49:17.930638 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-webhook-cert\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" 
Feb 20 14:49:17.932987 master-0 kubenswrapper[7744]: I0220 14:49:17.932874 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-apiservice-cert\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:17.947172 master-0 kubenswrapper[7744]: I0220 14:49:17.946225 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntlv2\" (UniqueName: \"kubernetes.io/projected/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-kube-api-access-ntlv2\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:17.952032 master-0 kubenswrapper[7744]: I0220 14:49:17.951277 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x5fhb"] Feb 20 14:49:18.047938 master-0 kubenswrapper[7744]: I0220 14:49:18.047874 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 14:49:18.441149 master-0 kubenswrapper[7744]: I0220 14:49:18.441040 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wddt" event={"ID":"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d","Type":"ContainerStarted","Data":"236aeb004972a9d3e9949ce545b3cfedb3b4ea60df38f4b61a82d0b2465524af"} Feb 20 14:49:18.821729 master-0 kubenswrapper[7744]: I0220 14:49:18.821682 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n2cdp"] Feb 20 14:49:18.824061 master-0 kubenswrapper[7744]: I0220 14:49:18.823553 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:18.833376 master-0 kubenswrapper[7744]: I0220 14:49:18.833315 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7zgzx" Feb 20 14:49:18.836138 master-0 kubenswrapper[7744]: I0220 14:49:18.836053 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2cdp"] Feb 20 14:49:18.855592 master-0 kubenswrapper[7744]: I0220 14:49:18.855550 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ztgdm"] Feb 20 14:49:18.858620 master-0 kubenswrapper[7744]: I0220 14:49:18.858601 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:18.861252 master-0 kubenswrapper[7744]: I0220 14:49:18.861226 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-9g7zv" Feb 20 14:49:18.861987 master-0 kubenswrapper[7744]: I0220 14:49:18.861967 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 20 14:49:18.935997 master-0 kubenswrapper[7744]: I0220 14:49:18.935889 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rln42\" (UniqueName: \"kubernetes.io/projected/ac3680de-aabf-414b-a340-5e5e6aea4822-kube-api-access-rln42\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:18.936208 master-0 kubenswrapper[7744]: I0220 14:49:18.936075 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ac3680de-aabf-414b-a340-5e5e6aea4822-utilities\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:18.936208 master-0 kubenswrapper[7744]: I0220 14:49:18.936146 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3680de-aabf-414b-a340-5e5e6aea4822-catalog-content\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:19.039972 master-0 kubenswrapper[7744]: I0220 14:49:19.037827 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3680de-aabf-414b-a340-5e5e6aea4822-utilities\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:19.039972 master-0 kubenswrapper[7744]: I0220 14:49:19.037885 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-rootfs\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:19.039972 master-0 kubenswrapper[7744]: I0220 14:49:19.037937 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcfnf\" (UniqueName: \"kubernetes.io/projected/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-kube-api-access-wcfnf\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:19.039972 master-0 kubenswrapper[7744]: I0220 14:49:19.038002 7744 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-mcd-auth-proxy-config\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:19.039972 master-0 kubenswrapper[7744]: I0220 14:49:19.038137 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3680de-aabf-414b-a340-5e5e6aea4822-catalog-content\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:19.039972 master-0 kubenswrapper[7744]: I0220 14:49:19.038194 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rln42\" (UniqueName: \"kubernetes.io/projected/ac3680de-aabf-414b-a340-5e5e6aea4822-kube-api-access-rln42\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:19.039972 master-0 kubenswrapper[7744]: I0220 14:49:19.038227 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-proxy-tls\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:19.039972 master-0 kubenswrapper[7744]: I0220 14:49:19.038597 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3680de-aabf-414b-a340-5e5e6aea4822-utilities\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " 
pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:19.039972 master-0 kubenswrapper[7744]: I0220 14:49:19.038737 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3680de-aabf-414b-a340-5e5e6aea4822-catalog-content\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:19.056740 master-0 kubenswrapper[7744]: I0220 14:49:19.056633 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rln42\" (UniqueName: \"kubernetes.io/projected/ac3680de-aabf-414b-a340-5e5e6aea4822-kube-api-access-rln42\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:19.139034 master-0 kubenswrapper[7744]: I0220 14:49:19.138959 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-proxy-tls\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:19.139034 master-0 kubenswrapper[7744]: I0220 14:49:19.139026 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-rootfs\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:19.139288 master-0 kubenswrapper[7744]: I0220 14:49:19.139087 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-rootfs\") pod \"machine-config-daemon-ztgdm\" (UID: 
\"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:19.143233 master-0 kubenswrapper[7744]: I0220 14:49:19.141994 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-proxy-tls\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:19.143233 master-0 kubenswrapper[7744]: I0220 14:49:19.142024 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-mcd-auth-proxy-config\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:19.143233 master-0 kubenswrapper[7744]: I0220 14:49:19.142051 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcfnf\" (UniqueName: \"kubernetes.io/projected/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-kube-api-access-wcfnf\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:19.145125 master-0 kubenswrapper[7744]: I0220 14:49:19.144613 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-mcd-auth-proxy-config\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:19.156495 master-0 kubenswrapper[7744]: I0220 14:49:19.156461 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:19.448769 master-0 kubenswrapper[7744]: I0220 14:49:19.448580 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5fhb" event={"ID":"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3","Type":"ContainerStarted","Data":"0c1b7791952a54d8b3ef36cceac195dbbcc9face3120a05a59672ee12b84ba46"} Feb 20 14:49:19.911945 master-0 kubenswrapper[7744]: I0220 14:49:19.911901 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcfnf\" (UniqueName: \"kubernetes.io/projected/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-kube-api-access-wcfnf\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:19.936171 master-0 kubenswrapper[7744]: I0220 14:49:19.935302 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z4wzg"] Feb 20 14:49:19.937250 master-0 kubenswrapper[7744]: I0220 14:49:19.937227 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z4wzg"] Feb 20 14:49:19.937443 master-0 kubenswrapper[7744]: I0220 14:49:19.937427 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:49:19.943510 master-0 kubenswrapper[7744]: I0220 14:49:19.943471 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-d7z2t" Feb 20 14:49:20.058866 master-0 kubenswrapper[7744]: I0220 14:49:20.058808 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm2jn\" (UniqueName: \"kubernetes.io/projected/93786626-fac4-48f0-bf72-992bc39f4a82-kube-api-access-fm2jn\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:49:20.058866 master-0 kubenswrapper[7744]: I0220 14:49:20.058867 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93786626-fac4-48f0-bf72-992bc39f4a82-utilities\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:49:20.059138 master-0 kubenswrapper[7744]: I0220 14:49:20.058985 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93786626-fac4-48f0-bf72-992bc39f4a82-catalog-content\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:49:20.082681 master-0 kubenswrapper[7744]: I0220 14:49:20.082605 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 14:49:20.160146 master-0 kubenswrapper[7744]: I0220 14:49:20.160043 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93786626-fac4-48f0-bf72-992bc39f4a82-catalog-content\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:49:20.160448 master-0 kubenswrapper[7744]: I0220 14:49:20.160228 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm2jn\" (UniqueName: \"kubernetes.io/projected/93786626-fac4-48f0-bf72-992bc39f4a82-kube-api-access-fm2jn\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:49:20.160448 master-0 kubenswrapper[7744]: I0220 14:49:20.160303 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93786626-fac4-48f0-bf72-992bc39f4a82-utilities\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:49:20.161248 master-0 kubenswrapper[7744]: I0220 14:49:20.161197 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93786626-fac4-48f0-bf72-992bc39f4a82-utilities\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:49:20.161403 master-0 kubenswrapper[7744]: I0220 14:49:20.161256 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93786626-fac4-48f0-bf72-992bc39f4a82-catalog-content\") pod \"redhat-operators-z4wzg\" (UID: 
\"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:49:20.179603 master-0 kubenswrapper[7744]: I0220 14:49:20.179457 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm2jn\" (UniqueName: \"kubernetes.io/projected/93786626-fac4-48f0-bf72-992bc39f4a82-kube-api-access-fm2jn\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:49:20.262644 master-0 kubenswrapper[7744]: I0220 14:49:20.262595 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:49:25.543715 master-0 kubenswrapper[7744]: W0220 14:49:25.543662 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3feb3da_f4fa_4b30_a55c_a0ac9c28b5de.slice/crio-dd0998467d8099b6ff8531304dd3f0e97b5c79ad6520753dadef997846c4d469 WatchSource:0}: Error finding container dd0998467d8099b6ff8531304dd3f0e97b5c79ad6520753dadef997846c4d469: Status 404 returned error can't find the container with id dd0998467d8099b6ff8531304dd3f0e97b5c79ad6520753dadef997846c4d469 Feb 20 14:49:25.923085 master-0 kubenswrapper[7744]: I0220 14:49:25.923035 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z4wzg"] Feb 20 14:49:25.932331 master-0 kubenswrapper[7744]: W0220 14:49:25.931161 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93786626_fac4_48f0_bf72_992bc39f4a82.slice/crio-db318f21d539d497ae2372897b56aaa3b6fedeaae97e556d74c5b3c251315d6e WatchSource:0}: Error finding container db318f21d539d497ae2372897b56aaa3b6fedeaae97e556d74c5b3c251315d6e: Status 404 returned error can't find the container with id db318f21d539d497ae2372897b56aaa3b6fedeaae97e556d74c5b3c251315d6e Feb 20 
14:49:25.978041 master-0 kubenswrapper[7744]: I0220 14:49:25.977885 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2cdp"]
Feb 20 14:49:26.083218 master-0 kubenswrapper[7744]: I0220 14:49:26.083184 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884"]
Feb 20 14:49:26.501997 master-0 kubenswrapper[7744]: I0220 14:49:26.501934 7744 generic.go:334] "Generic (PLEG): container finished" podID="6e5d953b-dbc7-48df-9d6b-d61030ffd6e3" containerID="47e63e5f96b20c92842a652e9774f2aec1b3dc91bda96ad7600899bf883b2ca7" exitCode=0
Feb 20 14:49:26.501997 master-0 kubenswrapper[7744]: I0220 14:49:26.501992 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5fhb" event={"ID":"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3","Type":"ContainerDied","Data":"47e63e5f96b20c92842a652e9774f2aec1b3dc91bda96ad7600899bf883b2ca7"}
Feb 20 14:49:26.529636 master-0 kubenswrapper[7744]: I0220 14:49:26.529579 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" event={"ID":"27ab8945-6a5b-4f7d-b893-6358da214499","Type":"ContainerStarted","Data":"3a018b588cd0fab81aef4437e8a3c01bf2d7562f85789ce7770c3b488cc91b89"}
Feb 20 14:49:26.536852 master-0 kubenswrapper[7744]: I0220 14:49:26.536803 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" event={"ID":"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4","Type":"ContainerStarted","Data":"ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645"}
Feb 20 14:49:26.538223 master-0 kubenswrapper[7744]: I0220 14:49:26.538197 7744 generic.go:334] "Generic (PLEG): container finished" podID="b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d" containerID="e7333c1741153b59af991a3ad87866cd9c88f6ffc09e8e9cf921a7d0c933ce1e" exitCode=0
Feb 20 14:49:26.538284 master-0 kubenswrapper[7744]: I0220 14:49:26.538241 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wddt" event={"ID":"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d","Type":"ContainerDied","Data":"e7333c1741153b59af991a3ad87866cd9c88f6ffc09e8e9cf921a7d0c933ce1e"}
Feb 20 14:49:26.543726 master-0 kubenswrapper[7744]: I0220 14:49:26.543651 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf" event={"ID":"3cb0fc3f-6897-4927-80fa-40cdf43be9a9","Type":"ContainerStarted","Data":"b6ddae4597fc71cd45b90b78a64fd133418f5e2decf6123b01925d04d7783ce6"}
Feb 20 14:49:26.545633 master-0 kubenswrapper[7744]: I0220 14:49:26.545591 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" event={"ID":"86f6836b-b018-4c7a-87ad-51809a4b9c7a","Type":"ContainerStarted","Data":"ffba8bf7d32818ce3d1f44fe5a89b60d238835da37d2ceadd236bb1c7f7c8066"}
Feb 20 14:49:26.545633 master-0 kubenswrapper[7744]: I0220 14:49:26.545641 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" event={"ID":"86f6836b-b018-4c7a-87ad-51809a4b9c7a","Type":"ContainerStarted","Data":"57a9d244672b000b813223a646214cb5149d5553c3f6c953fcf4645211da137b"}
Feb 20 14:49:26.552374 master-0 kubenswrapper[7744]: I0220 14:49:26.550092 7744 generic.go:334] "Generic (PLEG): container finished" podID="93786626-fac4-48f0-bf72-992bc39f4a82" containerID="56019874af29c4e772f7520294fcbc7349ad7c86907d26939ad87c2a68027c4a" exitCode=0
Feb 20 14:49:26.552374 master-0 kubenswrapper[7744]: I0220 14:49:26.550139 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4wzg" event={"ID":"93786626-fac4-48f0-bf72-992bc39f4a82","Type":"ContainerDied","Data":"56019874af29c4e772f7520294fcbc7349ad7c86907d26939ad87c2a68027c4a"}
Feb 20 14:49:26.552374 master-0 kubenswrapper[7744]: I0220 14:49:26.550186 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4wzg" event={"ID":"93786626-fac4-48f0-bf72-992bc39f4a82","Type":"ContainerStarted","Data":"db318f21d539d497ae2372897b56aaa3b6fedeaae97e556d74c5b3c251315d6e"}
Feb 20 14:49:26.552374 master-0 kubenswrapper[7744]: I0220 14:49:26.551608 7744 generic.go:334] "Generic (PLEG): container finished" podID="ac3680de-aabf-414b-a340-5e5e6aea4822" containerID="c971e0c69d94dc6cc3921b26332fbb6cd07c9071a5e1bbf75f6a1abf3da41b6d" exitCode=0
Feb 20 14:49:26.552374 master-0 kubenswrapper[7744]: I0220 14:49:26.551711 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2cdp" event={"ID":"ac3680de-aabf-414b-a340-5e5e6aea4822","Type":"ContainerDied","Data":"c971e0c69d94dc6cc3921b26332fbb6cd07c9071a5e1bbf75f6a1abf3da41b6d"}
Feb 20 14:49:26.552374 master-0 kubenswrapper[7744]: I0220 14:49:26.551739 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2cdp" event={"ID":"ac3680de-aabf-414b-a340-5e5e6aea4822","Type":"ContainerStarted","Data":"8df5627ff680da0c81aa3a3c2df511cdff6fa3f30ba3845441250cbb689ca7f4"}
Feb 20 14:49:26.558703 master-0 kubenswrapper[7744]: I0220 14:49:26.558660 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" event={"ID":"8a278abf-8c59-4454-94d0-a0d0768cbec5","Type":"ContainerStarted","Data":"c01f7b48911df7bce77798908697a8a45e47a4d0fafcbef1fd81d40c9b28eb31"}
Feb 20 14:49:26.581958 master-0 kubenswrapper[7744]: I0220 14:49:26.568608 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" event={"ID":"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de","Type":"ContainerStarted","Data":"b30b0c2af77d1a0b3adcf4f9ef949ee16aed89bd4c51896da816fdd72fb442ea"}
Feb 20 14:49:26.581958 master-0 kubenswrapper[7744]: I0220 14:49:26.568675 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" event={"ID":"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de","Type":"ContainerStarted","Data":"dd0998467d8099b6ff8531304dd3f0e97b5c79ad6520753dadef997846c4d469"}
Feb 20 14:49:26.592952 master-0 kubenswrapper[7744]: I0220 14:49:26.590560 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" event={"ID":"b385880b-a26b-4353-8f6f-b7f926bcc67c","Type":"ContainerStarted","Data":"fcbb2a13969414b96cd30dbad7457a49997232b9842608fdd68bbd19061a8401"}
Feb 20 14:49:26.626945 master-0 kubenswrapper[7744]: I0220 14:49:26.626843 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" podStartSLOduration=3.097318871 podStartE2EDuration="13.626821925s" podCreationTimestamp="2026-02-20 14:49:13 +0000 UTC" firstStartedPulling="2026-02-20 14:49:15.007804706 +0000 UTC m=+154.210004626" lastFinishedPulling="2026-02-20 14:49:25.53730776 +0000 UTC m=+164.739507680" observedRunningTime="2026-02-20 14:49:26.559160612 +0000 UTC m=+165.761360542" watchObservedRunningTime="2026-02-20 14:49:26.626821925 +0000 UTC m=+165.829021845"
Feb 20 14:49:26.632962 master-0 kubenswrapper[7744]: I0220 14:49:26.630601 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerStarted","Data":"d3902c23a65d809f06a7ebdcc4af6b01c4d6059cec90ec0825ac32ffd942466d"}
Feb 20 14:49:26.653905 master-0 kubenswrapper[7744]: I0220 14:49:26.651224 7744 generic.go:334] "Generic (PLEG): container finished" podID="a4339bd5-b8d1-467e-8158-4464ea901148" containerID="638df7437edc2bded4ad7d7ef94d2b7ebf2de761535638d3ecef6e0202944682" exitCode=0
Feb 20 14:49:26.653905 master-0 kubenswrapper[7744]: I0220 14:49:26.651285 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" event={"ID":"a4339bd5-b8d1-467e-8158-4464ea901148","Type":"ContainerDied","Data":"638df7437edc2bded4ad7d7ef94d2b7ebf2de761535638d3ecef6e0202944682"}
Feb 20 14:49:26.654767 master-0 kubenswrapper[7744]: I0220 14:49:26.654701 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x" event={"ID":"ee3a6748-0bbc-41bf-8726-a8db18faf03b","Type":"ContainerStarted","Data":"f8fca03cf5f84009dcd71f69da0387d2543ee01bf4de2848abd4137f8d885ea7"}
Feb 20 14:49:26.654767 master-0 kubenswrapper[7744]: I0220 14:49:26.654745 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x" event={"ID":"ee3a6748-0bbc-41bf-8726-a8db18faf03b","Type":"ContainerStarted","Data":"d290cc412a6f01775c2fd7e994caaa64944314f12f379ba1be952f8d473106fb"}
Feb 20 14:49:26.655734 master-0 kubenswrapper[7744]: I0220 14:49:26.655701 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" event={"ID":"49044786-483a-406e-8750-f6ded400841d","Type":"ContainerStarted","Data":"c537be0fb6abb27532917c3ba13de8d47b09b2f7faa20aacc94423594538336f"}
Feb 20 14:49:26.666950 master-0 kubenswrapper[7744]: I0220 14:49:26.665044 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" podStartSLOduration=2.290968999 podStartE2EDuration="12.665023303s" podCreationTimestamp="2026-02-20 14:49:14 +0000 UTC" firstStartedPulling="2026-02-20 14:49:15.157235028 +0000 UTC m=+154.359434948" lastFinishedPulling="2026-02-20 14:49:25.531289332 +0000 UTC m=+164.733489252" observedRunningTime="2026-02-20 14:49:26.625545263 +0000 UTC m=+165.827745183" watchObservedRunningTime="2026-02-20 14:49:26.665023303 +0000 UTC m=+165.867223223"
Feb 20 14:49:26.666950 master-0 kubenswrapper[7744]: I0220 14:49:26.666062 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf" podStartSLOduration=2.920044244 podStartE2EDuration="13.666055829s" podCreationTimestamp="2026-02-20 14:49:13 +0000 UTC" firstStartedPulling="2026-02-20 14:49:14.670794454 +0000 UTC m=+153.872994374" lastFinishedPulling="2026-02-20 14:49:25.416806009 +0000 UTC m=+164.619005959" observedRunningTime="2026-02-20 14:49:26.662710657 +0000 UTC m=+165.864910577" watchObservedRunningTime="2026-02-20 14:49:26.666055829 +0000 UTC m=+165.868255749"
Feb 20 14:49:26.705902 master-0 kubenswrapper[7744]: I0220 14:49:26.705815 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" podStartSLOduration=3.11525241 podStartE2EDuration="13.705798455s" podCreationTimestamp="2026-02-20 14:49:13 +0000 UTC" firstStartedPulling="2026-02-20 14:49:14.949993205 +0000 UTC m=+154.152193125" lastFinishedPulling="2026-02-20 14:49:25.54053925 +0000 UTC m=+164.742739170" observedRunningTime="2026-02-20 14:49:26.700007423 +0000 UTC m=+165.902207343" watchObservedRunningTime="2026-02-20 14:49:26.705798455 +0000 UTC m=+165.907998375"
Feb 20 14:49:26.789189 master-0 kubenswrapper[7744]: I0220 14:49:26.789058 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x" podStartSLOduration=3.261373532 podStartE2EDuration="13.789038841s" podCreationTimestamp="2026-02-20 14:49:13 +0000 UTC" firstStartedPulling="2026-02-20 14:49:14.965571548 +0000 UTC m=+154.167771478" lastFinishedPulling="2026-02-20 14:49:25.493236867 +0000 UTC m=+164.695436787" observedRunningTime="2026-02-20 14:49:26.788738184 +0000 UTC m=+165.990938124" watchObservedRunningTime="2026-02-20 14:49:26.789038841 +0000 UTC m=+165.991238761"
Feb 20 14:49:26.818461 master-0 kubenswrapper[7744]: I0220 14:49:26.818398 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" podStartSLOduration=2.463301204 podStartE2EDuration="12.818383552s" podCreationTimestamp="2026-02-20 14:49:14 +0000 UTC" firstStartedPulling="2026-02-20 14:49:15.176644355 +0000 UTC m=+154.378844275" lastFinishedPulling="2026-02-20 14:49:25.531726713 +0000 UTC m=+164.733926623" observedRunningTime="2026-02-20 14:49:26.816042725 +0000 UTC m=+166.018242655" watchObservedRunningTime="2026-02-20 14:49:26.818383552 +0000 UTC m=+166.020583472"
Feb 20 14:49:26.843096 master-0 kubenswrapper[7744]: I0220 14:49:26.843031 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" podStartSLOduration=2.756065858 podStartE2EDuration="12.843017077s" podCreationTimestamp="2026-02-20 14:49:14 +0000 UTC" firstStartedPulling="2026-02-20 14:49:15.444817245 +0000 UTC m=+154.647017165" lastFinishedPulling="2026-02-20 14:49:25.531768424 +0000 UTC m=+164.733968384" observedRunningTime="2026-02-20 14:49:26.840915586 +0000 UTC m=+166.043115506" watchObservedRunningTime="2026-02-20 14:49:26.843017077 +0000 UTC m=+166.045216997"
Feb 20 14:49:29.679650 master-0 kubenswrapper[7744]: I0220 14:49:29.679591 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" event={"ID":"4ecbdf77-0c73-487e-943e-5315a0f8b8d4","Type":"ContainerStarted","Data":"972260fa4d71d5a14fa2c2c948e5708100e799e6a9e6ff6a656d3e5a79c34eaa"}
Feb 20 14:49:31.695866 master-0 kubenswrapper[7744]: I0220 14:49:31.695505 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerStarted","Data":"7e5ec22b696c92663538e3ab3921b281ba772a0d18ef481de63a1f9eb71af2ff"}
Feb 20 14:49:31.697139 master-0 kubenswrapper[7744]: I0220 14:49:31.697075 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" event={"ID":"4ecbdf77-0c73-487e-943e-5315a0f8b8d4","Type":"ContainerStarted","Data":"e95fbb8f53ad11db019f7bffa9dab7bf19c983cfeacec893299776b627fcb23e"}
Feb 20 14:49:31.699091 master-0 kubenswrapper[7744]: I0220 14:49:31.697467 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884"
Feb 20 14:49:31.700406 master-0 kubenswrapper[7744]: I0220 14:49:31.700363 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" event={"ID":"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4","Type":"ContainerStarted","Data":"54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f"}
Feb 20 14:49:31.717319 master-0 kubenswrapper[7744]: I0220 14:49:31.715379 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" podStartSLOduration=8.175442929999999 podStartE2EDuration="18.715363941s" podCreationTimestamp="2026-02-20 14:49:13 +0000 UTC" firstStartedPulling="2026-02-20 14:49:14.991492855 +0000 UTC m=+154.193692775" lastFinishedPulling="2026-02-20 14:49:25.531413866 +0000 UTC m=+164.733613786" observedRunningTime="2026-02-20 14:49:31.712846419 +0000 UTC m=+170.915046349" watchObservedRunningTime="2026-02-20 14:49:31.715363941 +0000 UTC m=+170.917563861"
Feb 20 14:49:31.744944 master-0 kubenswrapper[7744]: I0220 14:49:31.742098 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" podStartSLOduration=14.742082827 podStartE2EDuration="14.742082827s" podCreationTimestamp="2026-02-20 14:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:49:31.742017136 +0000 UTC m=+170.944217066" watchObservedRunningTime="2026-02-20 14:49:31.742082827 +0000 UTC m=+170.944282747"
Feb 20 14:49:32.196769 master-0 kubenswrapper[7744]: I0220 14:49:32.196553 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884"
Feb 20 14:49:32.712062 master-0 kubenswrapper[7744]: I0220 14:49:32.711959 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" event={"ID":"a4339bd5-b8d1-467e-8158-4464ea901148","Type":"ContainerStarted","Data":"23b61efd81399a78fa532e7f0cf8b35a9b7f7f7e97f61e6f0f85ac41949a2a92"}
Feb 20 14:49:32.712712 master-0 kubenswrapper[7744]: I0220 14:49:32.712119 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc"
Feb 20 14:49:32.714366 master-0 kubenswrapper[7744]: I0220 14:49:32.714159 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" event={"ID":"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de","Type":"ContainerStarted","Data":"8eb9245e6af7170d918b2860e9811085196312cb4a81756246c16b0213c120bb"}
Feb 20 14:49:32.720281 master-0 kubenswrapper[7744]: I0220 14:49:32.719854 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" event={"ID":"0bedbe69-fc4b-4bd7-bcc2-acead927eda2","Type":"ContainerStarted","Data":"09c2a559e7cc2a5451aca2755577ab8e7c2b5ea2ef73bac50c4295f2287bdf15"}
Feb 20 14:49:32.723343 master-0 kubenswrapper[7744]: I0220 14:49:32.723292 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" event={"ID":"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4","Type":"ContainerStarted","Data":"515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a"}
Feb 20 14:49:32.726035 master-0 kubenswrapper[7744]: I0220 14:49:32.725976 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" event={"ID":"6949e9d5-460c-4b63-94cb-1b20ad75ee1c","Type":"ContainerStarted","Data":"3191dd09efb413807b0f7ac65de89263fd86c8dfae5fdd396c8d8c4703e7e79b"}
Feb 20 14:49:32.733994 master-0 kubenswrapper[7744]: I0220 14:49:32.730874 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" podStartSLOduration=1.856622995 podStartE2EDuration="18.730858406s" podCreationTimestamp="2026-02-20 14:49:14 +0000 UTC" firstStartedPulling="2026-02-20 14:49:15.388834759 +0000 UTC m=+154.591034679" lastFinishedPulling="2026-02-20 14:49:32.26307017 +0000 UTC m=+171.465270090" observedRunningTime="2026-02-20 14:49:32.729333078 +0000 UTC m=+171.931532998" watchObservedRunningTime="2026-02-20 14:49:32.730858406 +0000 UTC m=+171.933058326"
Feb 20 14:49:32.750805 master-0 kubenswrapper[7744]: I0220 14:49:32.750726 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" podStartSLOduration=14.750710124 podStartE2EDuration="14.750710124s" podCreationTimestamp="2026-02-20 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:49:32.746645764 +0000 UTC m=+171.948845694" watchObservedRunningTime="2026-02-20 14:49:32.750710124 +0000 UTC m=+171.952910044"
Feb 20 14:49:32.766600 master-0 kubenswrapper[7744]: I0220 14:49:32.766485 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" podStartSLOduration=2.073506044 podStartE2EDuration="18.766464261s" podCreationTimestamp="2026-02-20 14:49:14 +0000 UTC" firstStartedPulling="2026-02-20 14:49:15.528533962 +0000 UTC m=+154.730733882" lastFinishedPulling="2026-02-20 14:49:32.221492179 +0000 UTC m=+171.423692099" observedRunningTime="2026-02-20 14:49:32.762497173 +0000 UTC m=+171.964697103" watchObservedRunningTime="2026-02-20 14:49:32.766464261 +0000 UTC m=+171.968664181"
Feb 20 14:49:32.796966 master-0 kubenswrapper[7744]: I0220 14:49:32.796793 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" podStartSLOduration=2.578407008 podStartE2EDuration="19.796770646s" podCreationTimestamp="2026-02-20 14:49:13 +0000 UTC" firstStartedPulling="2026-02-20 14:49:14.967873204 +0000 UTC m=+154.170073124" lastFinishedPulling="2026-02-20 14:49:32.186236842 +0000 UTC m=+171.388436762" observedRunningTime="2026-02-20 14:49:32.791018854 +0000 UTC m=+171.993218794" watchObservedRunningTime="2026-02-20 14:49:32.796770646 +0000 UTC m=+171.998970566"
Feb 20 14:49:32.809063 master-0 kubenswrapper[7744]: I0220 14:49:32.808989 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" podStartSLOduration=8.706939871 podStartE2EDuration="19.808968165s" podCreationTimestamp="2026-02-20 14:49:13 +0000 UTC" firstStartedPulling="2026-02-20 14:49:14.426260725 +0000 UTC m=+153.628460635" lastFinishedPulling="2026-02-20 14:49:25.528289009 +0000 UTC m=+164.730488929" observedRunningTime="2026-02-20 14:49:32.804982527 +0000 UTC m=+172.007182457" watchObservedRunningTime="2026-02-20 14:49:32.808968165 +0000 UTC m=+172.011168085"
Feb 20 14:49:35.078493 master-0 kubenswrapper[7744]: I0220 14:49:35.078222 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"]
Feb 20 14:49:35.079384 master-0 kubenswrapper[7744]: I0220 14:49:35.079221 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"
Feb 20 14:49:35.081476 master-0 kubenswrapper[7744]: I0220 14:49:35.081437 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 20 14:49:35.081662 master-0 kubenswrapper[7744]: I0220 14:49:35.081448 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"]
Feb 20 14:49:35.081662 master-0 kubenswrapper[7744]: I0220 14:49:35.081577 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-tkxrl"
Feb 20 14:49:35.180124 master-0 kubenswrapper[7744]: I0220 14:49:35.180071 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-proxy-tls\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"
Feb 20 14:49:35.180315 master-0 kubenswrapper[7744]: I0220 14:49:35.180216 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vz22\" (UniqueName: \"kubernetes.io/projected/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-kube-api-access-2vz22\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"
Feb 20 14:49:35.180424 master-0 kubenswrapper[7744]: I0220 14:49:35.180376 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"
Feb 20 14:49:35.281054 master-0 kubenswrapper[7744]: I0220 14:49:35.280995 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-proxy-tls\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"
Feb 20 14:49:35.281054 master-0 kubenswrapper[7744]: I0220 14:49:35.281048 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vz22\" (UniqueName: \"kubernetes.io/projected/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-kube-api-access-2vz22\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"
Feb 20 14:49:35.281259 master-0 kubenswrapper[7744]: I0220 14:49:35.281077 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"
Feb 20 14:49:35.284064 master-0 kubenswrapper[7744]: I0220 14:49:35.284003 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"
Feb 20 14:49:35.285452 master-0 kubenswrapper[7744]: I0220 14:49:35.285417 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-proxy-tls\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"
Feb 20 14:49:35.297573 master-0 kubenswrapper[7744]: I0220 14:49:35.297538 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vz22\" (UniqueName: \"kubernetes.io/projected/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-kube-api-access-2vz22\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"
Feb 20 14:49:35.393757 master-0 kubenswrapper[7744]: I0220 14:49:35.393655 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"
Feb 20 14:49:35.604560 master-0 kubenswrapper[7744]: I0220 14:49:35.604497 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc"
Feb 20 14:49:35.767136 master-0 kubenswrapper[7744]: I0220 14:49:35.767084 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"]
Feb 20 14:49:36.023990 master-0 kubenswrapper[7744]: I0220 14:49:36.023715 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"]
Feb 20 14:49:36.024370 master-0 kubenswrapper[7744]: I0220 14:49:36.024353 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"
Feb 20 14:49:36.027264 master-0 kubenswrapper[7744]: I0220 14:49:36.026226 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 20 14:49:36.036113 master-0 kubenswrapper[7744]: I0220 14:49:36.032734 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp"]
Feb 20 14:49:36.036113 master-0 kubenswrapper[7744]: I0220 14:49:36.033482 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp"
Feb 20 14:49:36.036113 master-0 kubenswrapper[7744]: I0220 14:49:36.035396 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-nth67"]
Feb 20 14:49:36.037531 master-0 kubenswrapper[7744]: I0220 14:49:36.037489 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-nth67"
Feb 20 14:49:36.039112 master-0 kubenswrapper[7744]: I0220 14:49:36.039089 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 20 14:49:36.045539 master-0 kubenswrapper[7744]: I0220 14:49:36.045492 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"]
Feb 20 14:49:36.087354 master-0 kubenswrapper[7744]: I0220 14:49:36.087301 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp"]
Feb 20 14:49:36.093477 master-0 kubenswrapper[7744]: I0220 14:49:36.093433 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98rjm\" (UniqueName: \"kubernetes.io/projected/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-kube-api-access-98rjm\") pod \"collect-profiles-29526645-ff5j2\" (UID: \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"
Feb 20 14:49:36.093477 master-0 kubenswrapper[7744]: I0220 14:49:36.093471 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4dn4\" (UniqueName: \"kubernetes.io/projected/92008ac4-8deb-4fb9-9116-14d2d005bd36-kube-api-access-n4dn4\") pod \"network-check-source-58fb6744f5-nth67\" (UID: \"92008ac4-8deb-4fb9-9116-14d2d005bd36\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-nth67"
Feb 20 14:49:36.093477 master-0 kubenswrapper[7744]: I0220 14:49:36.093507 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-config-volume\") pod \"collect-profiles-29526645-ff5j2\" (UID: \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"
Feb 20 14:49:36.093763 master-0 kubenswrapper[7744]: I0220 14:49:36.093530 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e3cc4073-a926-4aba-81e6-c616c2bb2987-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-d8fvp\" (UID: \"e3cc4073-a926-4aba-81e6-c616c2bb2987\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp"
Feb 20 14:49:36.093763 master-0 kubenswrapper[7744]: I0220 14:49:36.093548 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-secret-volume\") pod \"collect-profiles-29526645-ff5j2\" (UID: \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"
Feb 20 14:49:36.108322 master-0 kubenswrapper[7744]: I0220 14:49:36.108284 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-nth67"]
Feb 20 14:49:36.194776 master-0 kubenswrapper[7744]: I0220 14:49:36.194715 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98rjm\" (UniqueName: \"kubernetes.io/projected/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-kube-api-access-98rjm\") pod \"collect-profiles-29526645-ff5j2\" (UID: \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"
Feb 20 14:49:36.195075 master-0 kubenswrapper[7744]: I0220 14:49:36.195052 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4dn4\" (UniqueName: \"kubernetes.io/projected/92008ac4-8deb-4fb9-9116-14d2d005bd36-kube-api-access-n4dn4\") pod \"network-check-source-58fb6744f5-nth67\" (UID: \"92008ac4-8deb-4fb9-9116-14d2d005bd36\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-nth67"
Feb 20 14:49:36.195171 master-0 kubenswrapper[7744]: I0220 14:49:36.195133 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-config-volume\") pod \"collect-profiles-29526645-ff5j2\" (UID: \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"
Feb 20 14:49:36.195171 master-0 kubenswrapper[7744]: I0220 14:49:36.195167 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e3cc4073-a926-4aba-81e6-c616c2bb2987-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-d8fvp\" (UID: \"e3cc4073-a926-4aba-81e6-c616c2bb2987\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp"
Feb 20 14:49:36.195340 master-0 kubenswrapper[7744]: I0220 14:49:36.195210 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-secret-volume\") pod \"collect-profiles-29526645-ff5j2\" (UID: \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"
Feb 20 14:49:36.196214 master-0 kubenswrapper[7744]: I0220 14:49:36.196185 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-config-volume\") pod \"collect-profiles-29526645-ff5j2\" (UID: \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"
Feb 20 14:49:36.198996 master-0 kubenswrapper[7744]: I0220 14:49:36.198969 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e3cc4073-a926-4aba-81e6-c616c2bb2987-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-d8fvp\" (UID: \"e3cc4073-a926-4aba-81e6-c616c2bb2987\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp"
Feb 20 14:49:36.199568 master-0 kubenswrapper[7744]: I0220 14:49:36.199545 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-secret-volume\") pod \"collect-profiles-29526645-ff5j2\" (UID: \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"
Feb 20 14:49:36.202234 master-0 kubenswrapper[7744]: I0220 14:49:36.202204 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-6r5qx_8157f73d-c757-40c4-80bc-3c9de2f2288a/authentication-operator/0.log"
Feb 20 14:49:36.221530 master-0 kubenswrapper[7744]: I0220 14:49:36.221495 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4dn4\" (UniqueName: \"kubernetes.io/projected/92008ac4-8deb-4fb9-9116-14d2d005bd36-kube-api-access-n4dn4\") pod \"network-check-source-58fb6744f5-nth67\" (UID: \"92008ac4-8deb-4fb9-9116-14d2d005bd36\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-nth67"
Feb 20 14:49:36.221679 master-0 kubenswrapper[7744]: I0220 14:49:36.221645 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98rjm\" (UniqueName: \"kubernetes.io/projected/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-kube-api-access-98rjm\") pod \"collect-profiles-29526645-ff5j2\" (UID: \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"
Feb 20 14:49:36.395592 master-0 kubenswrapper[7744]: I0220 14:49:36.395475 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"
Feb 20 14:49:36.401083 master-0 kubenswrapper[7744]: I0220 14:49:36.401059 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-6r5qx_8157f73d-c757-40c4-80bc-3c9de2f2288a/authentication-operator/1.log"
Feb 20 14:49:36.424669 master-0 kubenswrapper[7744]: I0220 14:49:36.424621 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-nth67"
Feb 20 14:49:36.442895 master-0 kubenswrapper[7744]: I0220 14:49:36.442865 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp"
Feb 20 14:49:36.595994 master-0 kubenswrapper[7744]: I0220 14:49:36.595963 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7659f6b598-z8454_a8c0a6d2-f1f9-49e3-9475-4983b50667bf/fix-audit-permissions/0.log"
Feb 20 14:49:36.756341 master-0 kubenswrapper[7744]: I0220 14:49:36.756283 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" event={"ID":"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367","Type":"ContainerStarted","Data":"89ac61279a537a1903577035106a26789b5d8208200729a8969d5e1dbcb119e4"}
Feb 20 14:49:36.756341 master-0 kubenswrapper[7744]: I0220 14:49:36.756345 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" event={"ID":"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367","Type":"ContainerStarted","Data":"361cac7f381ef490c05a6ad20d7d519e61ac704ec32bc6d37576fd4551ff3afc"}
Feb 20 14:49:36.756572 master-0 kubenswrapper[7744]: I0220 14:49:36.756359 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" event={"ID":"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367","Type":"ContainerStarted","Data":"864b7e188cfb62e2b7e87dc90ff4536aab0f9cd5aed1bd5481272fd1babe2e98"}
Feb 20 14:49:36.782103 master-0 kubenswrapper[7744]: I0220 14:49:36.782038 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" podStartSLOduration=1.78201995 podStartE2EDuration="1.78201995s" podCreationTimestamp="2026-02-20 14:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:49:36.777488569 +0000 UTC m=+175.979688489" watchObservedRunningTime="2026-02-20 14:49:36.78201995 +0000 UTC m=+175.984219870"
Feb 20 14:49:36.797486 master-0 kubenswrapper[7744]: I0220 14:49:36.797440 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7659f6b598-z8454_a8c0a6d2-f1f9-49e3-9475-4983b50667bf/oauth-apiserver/0.log"
Feb 20 14:49:36.817180 master-0 kubenswrapper[7744]: I0220 14:49:36.816404 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2"]
Feb 20 14:49:36.865231 master-0 kubenswrapper[7744]: I0220 14:49:36.865184 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-nth67"]
Feb 20 14:49:37.004380 master-0 kubenswrapper[7744]: I0220 14:49:37.004318 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-c8w7r_b385880b-a26b-4353-8f6f-b7f926bcc67c/kube-rbac-proxy/0.log"
Feb 20 14:49:37.121582 master-0 kubenswrapper[7744]: I0220 14:49:37.121509 7744 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier"
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 20 14:49:37.198964 master-0 kubenswrapper[7744]: I0220 14:49:37.198895 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-c8w7r_b385880b-a26b-4353-8f6f-b7f926bcc67c/cluster-autoscaler-operator/0.log" Feb 20 14:49:37.399469 master-0 kubenswrapper[7744]: I0220 14:49:37.399366 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k2tnk_86f6836b-b018-4c7a-87ad-51809a4b9c7a/cluster-baremetal-operator/0.log" Feb 20 14:49:37.602319 master-0 kubenswrapper[7744]: I0220 14:49:37.602276 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k2tnk_86f6836b-b018-4c7a-87ad-51809a4b9c7a/baremetal-kube-rbac-proxy/0.log" Feb 20 14:49:37.796441 master-0 kubenswrapper[7744]: I0220 14:49:37.796405 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-2tpv8_49044786-483a-406e-8750-f6ded400841d/control-plane-machine-set-operator/0.log" Feb 20 14:49:38.002646 master-0 kubenswrapper[7744]: I0220 14:49:38.002557 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-gjdb4_0bedbe69-fc4b-4bd7-bcc2-acead927eda2/kube-rbac-proxy/0.log" Feb 20 14:49:38.199918 master-0 kubenswrapper[7744]: I0220 14:49:38.199796 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-gjdb4_0bedbe69-fc4b-4bd7-bcc2-acead927eda2/machine-api-operator/0.log" Feb 20 14:49:38.405946 master-0 kubenswrapper[7744]: I0220 14:49:38.405883 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-jhd5c_234a44fd-c153-47a6-a11d-7d4b7165c236/etcd-operator/0.log" Feb 20 14:49:38.471415 master-0 
kubenswrapper[7744]: I0220 14:49:38.469334 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7b65dc9fcb-tlsdt"] Feb 20 14:49:38.471415 master-0 kubenswrapper[7744]: I0220 14:49:38.470702 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.477003 master-0 kubenswrapper[7744]: I0220 14:49:38.474838 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 20 14:49:38.477003 master-0 kubenswrapper[7744]: I0220 14:49:38.475057 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 20 14:49:38.477003 master-0 kubenswrapper[7744]: I0220 14:49:38.475166 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 20 14:49:38.477003 master-0 kubenswrapper[7744]: I0220 14:49:38.475067 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-gfr9m" Feb 20 14:49:38.477003 master-0 kubenswrapper[7744]: I0220 14:49:38.475312 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 20 14:49:38.477003 master-0 kubenswrapper[7744]: I0220 14:49:38.475352 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 20 14:49:38.477003 master-0 kubenswrapper[7744]: I0220 14:49:38.475465 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 20 14:49:38.529404 master-0 kubenswrapper[7744]: I0220 14:49:38.529349 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-service-ca-bundle\") pod 
\"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.529404 master-0 kubenswrapper[7744]: I0220 14:49:38.529392 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-metrics-certs\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.529630 master-0 kubenswrapper[7744]: I0220 14:49:38.529419 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.529630 master-0 kubenswrapper[7744]: I0220 14:49:38.529437 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj796\" (UniqueName: \"kubernetes.io/projected/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-kube-api-access-rj796\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.529630 master-0 kubenswrapper[7744]: I0220 14:49:38.529453 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-stats-auth\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.620951 master-0 kubenswrapper[7744]: I0220 14:49:38.620084 7744 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-jhd5c_234a44fd-c153-47a6-a11d-7d4b7165c236/etcd-operator/1.log" Feb 20 14:49:38.640951 master-0 kubenswrapper[7744]: I0220 14:49:38.632542 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-service-ca-bundle\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.640951 master-0 kubenswrapper[7744]: I0220 14:49:38.632598 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-metrics-certs\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.640951 master-0 kubenswrapper[7744]: I0220 14:49:38.632628 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.640951 master-0 kubenswrapper[7744]: I0220 14:49:38.632677 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj796\" (UniqueName: \"kubernetes.io/projected/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-kube-api-access-rj796\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.640951 master-0 kubenswrapper[7744]: I0220 14:49:38.632694 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-stats-auth\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.640951 master-0 kubenswrapper[7744]: I0220 14:49:38.633883 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-service-ca-bundle\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.640951 master-0 kubenswrapper[7744]: E0220 14:49:38.633975 7744 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: secret "router-certs-default" not found Feb 20 14:49:38.640951 master-0 kubenswrapper[7744]: E0220 14:49:38.634014 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate podName:5f55b652-bef8-4f50-9d1d-9d0a340c1dea nodeName:}" failed. No retries permitted until 2026-02-20 14:49:39.13400068 +0000 UTC m=+178.336200590 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate") pod "router-default-7b65dc9fcb-tlsdt" (UID: "5f55b652-bef8-4f50-9d1d-9d0a340c1dea") : secret "router-certs-default" not found Feb 20 14:49:38.640951 master-0 kubenswrapper[7744]: I0220 14:49:38.640608 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-stats-auth\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.656947 master-0 kubenswrapper[7744]: I0220 14:49:38.653509 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-metrics-certs\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.663941 master-0 kubenswrapper[7744]: I0220 14:49:38.663645 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj796\" (UniqueName: \"kubernetes.io/projected/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-kube-api-access-rj796\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:38.796223 master-0 kubenswrapper[7744]: I0220 14:49:38.796178 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/setup/0.log" Feb 20 14:49:39.000825 master-0 kubenswrapper[7744]: I0220 14:49:39.000776 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-ensure-env-vars/0.log" Feb 20 14:49:39.144003 master-0 kubenswrapper[7744]: 
I0220 14:49:39.140526 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:39.144003 master-0 kubenswrapper[7744]: E0220 14:49:39.140775 7744 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: secret "router-certs-default" not found Feb 20 14:49:39.144003 master-0 kubenswrapper[7744]: E0220 14:49:39.140898 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate podName:5f55b652-bef8-4f50-9d1d-9d0a340c1dea nodeName:}" failed. No retries permitted until 2026-02-20 14:49:40.140859266 +0000 UTC m=+179.343059236 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate") pod "router-default-7b65dc9fcb-tlsdt" (UID: "5f55b652-bef8-4f50-9d1d-9d0a340c1dea") : secret "router-certs-default" not found Feb 20 14:49:39.198616 master-0 kubenswrapper[7744]: I0220 14:49:39.198558 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-resources-copy/0.log" Feb 20 14:49:39.395641 master-0 kubenswrapper[7744]: I0220 14:49:39.395551 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 20 14:49:39.536112 master-0 kubenswrapper[7744]: I0220 14:49:39.536041 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5frvf"] Feb 20 14:49:39.537330 master-0 kubenswrapper[7744]: I0220 14:49:39.537291 7744 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 14:49:39.538799 master-0 kubenswrapper[7744]: I0220 14:49:39.538766 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-rnbdm" Feb 20 14:49:39.539273 master-0 kubenswrapper[7744]: I0220 14:49:39.539205 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 20 14:49:39.539762 master-0 kubenswrapper[7744]: I0220 14:49:39.539741 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 20 14:49:39.602124 master-0 kubenswrapper[7744]: I0220 14:49:39.601893 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log" Feb 20 14:49:39.647040 master-0 kubenswrapper[7744]: I0220 14:49:39.646889 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-certs\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 14:49:39.647219 master-0 kubenswrapper[7744]: I0220 14:49:39.647049 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-node-bootstrap-token\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 14:49:39.647219 master-0 kubenswrapper[7744]: I0220 14:49:39.647075 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pcqd4\" (UniqueName: \"kubernetes.io/projected/ef3a09a5-b019-48a3-97f8-7ddadb37394e-kube-api-access-pcqd4\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 14:49:39.748702 master-0 kubenswrapper[7744]: I0220 14:49:39.748633 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-node-bootstrap-token\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 14:49:39.748901 master-0 kubenswrapper[7744]: I0220 14:49:39.748794 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcqd4\" (UniqueName: \"kubernetes.io/projected/ef3a09a5-b019-48a3-97f8-7ddadb37394e-kube-api-access-pcqd4\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 14:49:39.748901 master-0 kubenswrapper[7744]: I0220 14:49:39.748863 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-certs\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 14:49:39.752895 master-0 kubenswrapper[7744]: I0220 14:49:39.752837 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-certs\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 14:49:39.765795 master-0 kubenswrapper[7744]: 
I0220 14:49:39.765754 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-node-bootstrap-token\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 14:49:39.765908 master-0 kubenswrapper[7744]: I0220 14:49:39.765807 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcqd4\" (UniqueName: \"kubernetes.io/projected/ef3a09a5-b019-48a3-97f8-7ddadb37394e-kube-api-access-pcqd4\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 14:49:39.799514 master-0 kubenswrapper[7744]: I0220 14:49:39.799384 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 20 14:49:39.856491 master-0 kubenswrapper[7744]: I0220 14:49:39.856434 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 14:49:39.996408 master-0 kubenswrapper[7744]: I0220 14:49:39.996283 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-readyz/0.log" Feb 20 14:49:40.154262 master-0 kubenswrapper[7744]: I0220 14:49:40.154155 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:40.173125 master-0 kubenswrapper[7744]: I0220 14:49:40.173069 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:40.195907 master-0 kubenswrapper[7744]: I0220 14:49:40.195853 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 20 14:49:40.313139 master-0 kubenswrapper[7744]: I0220 14:49:40.313014 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:49:40.398677 master-0 kubenswrapper[7744]: I0220 14:49:40.398073 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_53835140-8eed-401c-ac07-f89b554ff616/installer/0.log" Feb 20 14:49:40.604337 master-0 kubenswrapper[7744]: I0220 14:49:40.604136 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-pptg6_43e9807a-859c-44c1-8511-0066b0f59ff8/kube-apiserver-operator/0.log" Feb 20 14:49:40.799194 master-0 kubenswrapper[7744]: I0220 14:49:40.799117 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-pptg6_43e9807a-859c-44c1-8511-0066b0f59ff8/kube-apiserver-operator/1.log" Feb 20 14:49:40.997338 master-0 kubenswrapper[7744]: I0220 14:49:40.997278 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/setup/0.log" Feb 20 14:49:41.204363 master-0 kubenswrapper[7744]: I0220 14:49:41.204312 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver/0.log" Feb 20 14:49:41.392399 master-0 kubenswrapper[7744]: W0220 14:49:41.392006 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92008ac4_8deb_4fb9_9116_14d2d005bd36.slice/crio-6ea59bb762ddd917687d0ab9c9b4c4c212079c243fa33d303d25cc82d89c923b WatchSource:0}: Error finding container 6ea59bb762ddd917687d0ab9c9b4c4c212079c243fa33d303d25cc82d89c923b: Status 404 returned error can't find the container with id 6ea59bb762ddd917687d0ab9c9b4c4c212079c243fa33d303d25cc82d89c923b Feb 20 14:49:41.396220 master-0 kubenswrapper[7744]: I0220 14:49:41.396181 7744 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver-insecure-readyz/0.log" Feb 20 14:49:41.600197 master-0 kubenswrapper[7744]: I0220 14:49:41.600110 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_986049a1-b3e4-4dca-b178-55eaa7a27bfb/installer/0.log" Feb 20 14:49:41.785255 master-0 kubenswrapper[7744]: I0220 14:49:41.785193 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-nth67" event={"ID":"92008ac4-8deb-4fb9-9116-14d2d005bd36","Type":"ContainerStarted","Data":"6ea59bb762ddd917687d0ab9c9b4c4c212079c243fa33d303d25cc82d89c923b"} Feb 20 14:49:41.800340 master-0 kubenswrapper[7744]: I0220 14:49:41.800274 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_277ab008-e6f0-49cd-801d-54d3071036d4/installer/0.log" Feb 20 14:49:42.002592 master-0 kubenswrapper[7744]: I0220 14:49:42.002522 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-lt7ww_4c31b8a7-edcb-403d-9122-7eb740f7d659/kube-controller-manager-operator/0.log" Feb 20 14:49:42.197262 master-0 kubenswrapper[7744]: I0220 14:49:42.197134 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-lt7ww_4c31b8a7-edcb-403d-9122-7eb740f7d659/kube-controller-manager-operator/1.log" Feb 20 14:49:42.408606 master-0 kubenswrapper[7744]: I0220 14:49:42.408505 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/2.log" Feb 20 14:49:42.804444 master-0 kubenswrapper[7744]: I0220 14:49:42.804262 7744 log.go:25] 
"Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/3.log" Feb 20 14:49:42.827018 master-0 kubenswrapper[7744]: W0220 14:49:42.826870 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c60ad1f_f8d9_4c67_97a3_f9fa491bd463.slice/crio-ea68c4defdeeb01e90817720006f1125f253badcc4d0dde7d2c2223dd487b94c WatchSource:0}: Error finding container ea68c4defdeeb01e90817720006f1125f253badcc4d0dde7d2c2223dd487b94c: Status 404 returned error can't find the container with id ea68c4defdeeb01e90817720006f1125f253badcc4d0dde7d2c2223dd487b94c Feb 20 14:49:43.005422 master-0 kubenswrapper[7744]: I0220 14:49:43.004502 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/cluster-policy-controller/0.log" Feb 20 14:49:43.203208 master-0 kubenswrapper[7744]: I0220 14:49:43.203053 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/0.log" Feb 20 14:49:43.311554 master-0 kubenswrapper[7744]: I0220 14:49:43.311428 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp"] Feb 20 14:49:43.401846 master-0 kubenswrapper[7744]: I0220 14:49:43.401743 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/1.log" Feb 20 14:49:43.602021 master-0 kubenswrapper[7744]: I0220 14:49:43.601887 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_975d0fde-cb2f-4599-b3b7-7de876307a61/installer/0.log" Feb 20 14:49:43.799738 master-0 kubenswrapper[7744]: I0220 14:49:43.799600 7744 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2" event={"ID":"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463","Type":"ContainerStarted","Data":"ea68c4defdeeb01e90817720006f1125f253badcc4d0dde7d2c2223dd487b94c"}
Feb 20 14:49:43.803007 master-0 kubenswrapper[7744]: I0220 14:49:43.802945 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-n29zt_989af121-da08-4f40-b08c-dd2aa67bc60c/kube-scheduler-operator-container/0.log"
Feb 20 14:49:43.998265 master-0 kubenswrapper[7744]: I0220 14:49:43.998208 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-n29zt_989af121-da08-4f40-b08c-dd2aa67bc60c/kube-scheduler-operator-container/1.log"
Feb 20 14:49:44.365453 master-0 kubenswrapper[7744]: I0220 14:49:44.365261 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-pwm24_db9dc349-5216-43ff-8c17-3a9384a010ea/openshift-apiserver-operator/0.log"
Feb 20 14:49:44.403356 master-0 kubenswrapper[7744]: I0220 14:49:44.403314 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-pwm24_db9dc349-5216-43ff-8c17-3a9384a010ea/openshift-apiserver-operator/1.log"
Feb 20 14:49:44.596177 master-0 kubenswrapper[7744]: I0220 14:49:44.596138 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-776c8f54bc-gmvx8_c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/fix-audit-permissions/0.log"
Feb 20 14:49:44.803808 master-0 kubenswrapper[7744]: I0220 14:49:44.803754 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-776c8f54bc-gmvx8_c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/openshift-apiserver/0.log"
Feb 20 14:49:45.056978 master-0 kubenswrapper[7744]: I0220 14:49:45.056707 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-776c8f54bc-gmvx8_c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/openshift-apiserver-check-endpoints/0.log"
Feb 20 14:49:45.205430 master-0 kubenswrapper[7744]: I0220 14:49:45.205377 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-jhd5c_234a44fd-c153-47a6-a11d-7d4b7165c236/etcd-operator/0.log"
Feb 20 14:49:45.398161 master-0 kubenswrapper[7744]: I0220 14:49:45.398011 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-jhd5c_234a44fd-c153-47a6-a11d-7d4b7165c236/etcd-operator/1.log"
Feb 20 14:49:45.603681 master-0 kubenswrapper[7744]: I0220 14:49:45.603632 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-j66jm_45d7ef0c-272b-4d1e-965f-484975d5d25c/openshift-controller-manager-operator/0.log"
Feb 20 14:49:45.842036 master-0 kubenswrapper[7744]: I0220 14:49:45.841969 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-j66jm_45d7ef0c-272b-4d1e-965f-484975d5d25c/openshift-controller-manager-operator/1.log"
Feb 20 14:49:45.889601 master-0 kubenswrapper[7744]: I0220 14:49:45.889517 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 14:49:46.032908 master-0 kubenswrapper[7744]: I0220 14:49:46.032866 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-69686f5989-xkhpg_26c30461-efe3-4999-9698-f3c478c71fa0/controller-manager/0.log"
Feb 20 14:49:46.668549 master-0 kubenswrapper[7744]: I0220 14:49:46.668494 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6c6947b888-mrmnj_a5aae2e2-7323-4927-a5ca-645e2a8b7bf9/route-controller-manager/0.log"
Feb 20 14:49:47.126503 master-0 kubenswrapper[7744]: I0220 14:49:47.126431 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-2g7jd_2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a/catalog-operator/0.log"
Feb 20 14:49:47.667888 master-0 kubenswrapper[7744]: I0220 14:49:47.667845 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5499d7f7bb-57rwb_64e9eca9-bbdd-4eca-9219-922bbab9b388/olm-operator/0.log"
Feb 20 14:49:47.674200 master-0 kubenswrapper[7744]: I0220 14:49:47.674160 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-2sw9z_1fe69517-eec2-4721-933c-fa27cea7ab1f/kube-rbac-proxy/0.log"
Feb 20 14:49:47.681320 master-0 kubenswrapper[7744]: I0220 14:49:47.681270 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-2sw9z_1fe69517-eec2-4721-933c-fa27cea7ab1f/package-server-manager/0.log"
Feb 20 14:49:47.690428 master-0 kubenswrapper[7744]: I0220 14:49:47.690388 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-6c5ff764cd-l2884_4ecbdf77-0c73-487e-943e-5315a0f8b8d4/packageserver/0.log"
Feb 20 14:49:49.363714 master-0 kubenswrapper[7744]: W0220 14:49:49.363537 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3cc4073_a926_4aba_81e6_c616c2bb2987.slice/crio-535151362e36c1745033704c37dfb910d9260b348b0c35a197ec5a2c74a4ea53 WatchSource:0}: Error finding container 535151362e36c1745033704c37dfb910d9260b348b0c35a197ec5a2c74a4ea53: Status 404 returned error can't find the container with id 535151362e36c1745033704c37dfb910d9260b348b0c35a197ec5a2c74a4ea53
Feb 20 14:49:49.836404 master-0 kubenswrapper[7744]: I0220 14:49:49.836323 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp" event={"ID":"e3cc4073-a926-4aba-81e6-c616c2bb2987","Type":"ContainerStarted","Data":"535151362e36c1745033704c37dfb910d9260b348b0c35a197ec5a2c74a4ea53"}
Feb 20 14:49:50.077342 master-0 kubenswrapper[7744]: I0220 14:49:50.077245 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"]
Feb 20 14:49:50.077751 master-0 kubenswrapper[7744]: I0220 14:49:50.077685 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf" podUID="3cb0fc3f-6897-4927-80fa-40cdf43be9a9" containerName="kube-rbac-proxy" containerID="cri-o://9a9be4e938a26463163025a87daa920277947a01e83f7f6d8b5804bafc0ed314" gracePeriod=30
Feb 20 14:49:50.077975 master-0 kubenswrapper[7744]: I0220 14:49:50.077819 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf" podUID="3cb0fc3f-6897-4927-80fa-40cdf43be9a9" containerName="machine-approver-controller" containerID="cri-o://b6ddae4597fc71cd45b90b78a64fd133418f5e2decf6123b01925d04d7783ce6" gracePeriod=30
Feb 20 14:49:50.845464 master-0 kubenswrapper[7744]: I0220 14:49:50.845338 7744 generic.go:334] "Generic (PLEG): container finished" podID="3cb0fc3f-6897-4927-80fa-40cdf43be9a9" containerID="b6ddae4597fc71cd45b90b78a64fd133418f5e2decf6123b01925d04d7783ce6" exitCode=0
Feb 20 14:49:50.845464 master-0 kubenswrapper[7744]: I0220 14:49:50.845392 7744 generic.go:334] "Generic (PLEG): container finished" podID="3cb0fc3f-6897-4927-80fa-40cdf43be9a9" containerID="9a9be4e938a26463163025a87daa920277947a01e83f7f6d8b5804bafc0ed314" exitCode=0
Feb 20 14:49:50.846414 master-0 kubenswrapper[7744]: I0220 14:49:50.845454 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf" event={"ID":"3cb0fc3f-6897-4927-80fa-40cdf43be9a9","Type":"ContainerDied","Data":"b6ddae4597fc71cd45b90b78a64fd133418f5e2decf6123b01925d04d7783ce6"}
Feb 20 14:49:50.846414 master-0 kubenswrapper[7744]: I0220 14:49:50.845543 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf" event={"ID":"3cb0fc3f-6897-4927-80fa-40cdf43be9a9","Type":"ContainerDied","Data":"9a9be4e938a26463163025a87daa920277947a01e83f7f6d8b5804bafc0ed314"}
Feb 20 14:49:51.361812 master-0 kubenswrapper[7744]: W0220 14:49:51.361754 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef3a09a5_b019_48a3_97f8_7ddadb37394e.slice/crio-34cc992d367669608546ba8ae39873d4139dfeeb4850c5979567cde508c8b524 WatchSource:0}: Error finding container 34cc992d367669608546ba8ae39873d4139dfeeb4850c5979567cde508c8b524: Status 404 returned error can't find the container with id 34cc992d367669608546ba8ae39873d4139dfeeb4850c5979567cde508c8b524
Feb 20 14:49:51.390399 master-0 kubenswrapper[7744]: W0220 14:49:51.390341 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f55b652_bef8_4f50_9d1d_9d0a340c1dea.slice/crio-80505c2710f2e2216eec6a4e82e9601038f01af58386ea11bb977eb9c2b78e51 WatchSource:0}: Error finding container 80505c2710f2e2216eec6a4e82e9601038f01af58386ea11bb977eb9c2b78e51: Status 404 returned error can't find the container with id 80505c2710f2e2216eec6a4e82e9601038f01af58386ea11bb977eb9c2b78e51
Feb 20 14:49:51.426399 master-0 kubenswrapper[7744]: I0220 14:49:51.426350 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:51.523616 master-0 kubenswrapper[7744]: I0220 14:49:51.523490 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-auth-proxy-config\") pod \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") "
Feb 20 14:49:51.523616 master-0 kubenswrapper[7744]: I0220 14:49:51.523576 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-machine-approver-tls\") pod \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") "
Feb 20 14:49:51.524107 master-0 kubenswrapper[7744]: I0220 14:49:51.524034 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2fcg\" (UniqueName: \"kubernetes.io/projected/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-kube-api-access-m2fcg\") pod \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") "
Feb 20 14:49:51.524208 master-0 kubenswrapper[7744]: I0220 14:49:51.524153 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-config\") pod \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\" (UID: \"3cb0fc3f-6897-4927-80fa-40cdf43be9a9\") "
Feb 20 14:49:51.526015 master-0 kubenswrapper[7744]: I0220 14:49:51.525856 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-config" (OuterVolumeSpecName: "config") pod "3cb0fc3f-6897-4927-80fa-40cdf43be9a9" (UID: "3cb0fc3f-6897-4927-80fa-40cdf43be9a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:49:51.527689 master-0 kubenswrapper[7744]: I0220 14:49:51.526894 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "3cb0fc3f-6897-4927-80fa-40cdf43be9a9" (UID: "3cb0fc3f-6897-4927-80fa-40cdf43be9a9"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:49:51.533351 master-0 kubenswrapper[7744]: I0220 14:49:51.533314 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "3cb0fc3f-6897-4927-80fa-40cdf43be9a9" (UID: "3cb0fc3f-6897-4927-80fa-40cdf43be9a9"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 14:49:51.533351 master-0 kubenswrapper[7744]: I0220 14:49:51.533409 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-kube-api-access-m2fcg" (OuterVolumeSpecName: "kube-api-access-m2fcg") pod "3cb0fc3f-6897-4927-80fa-40cdf43be9a9" (UID: "3cb0fc3f-6897-4927-80fa-40cdf43be9a9"). InnerVolumeSpecName "kube-api-access-m2fcg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:49:51.628379 master-0 kubenswrapper[7744]: I0220 14:49:51.628329 7744 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-config\") on node \"master-0\" DevicePath \"\""
Feb 20 14:49:51.628379 master-0 kubenswrapper[7744]: I0220 14:49:51.628377 7744 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Feb 20 14:49:51.628563 master-0 kubenswrapper[7744]: I0220 14:49:51.628395 7744 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-machine-approver-tls\") on node \"master-0\" DevicePath \"\""
Feb 20 14:49:51.628563 master-0 kubenswrapper[7744]: I0220 14:49:51.628410 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2fcg\" (UniqueName: \"kubernetes.io/projected/3cb0fc3f-6897-4927-80fa-40cdf43be9a9-kube-api-access-m2fcg\") on node \"master-0\" DevicePath \"\""
Feb 20 14:49:51.856909 master-0 kubenswrapper[7744]: I0220 14:49:51.856748 7744 generic.go:334] "Generic (PLEG): container finished" podID="ac3680de-aabf-414b-a340-5e5e6aea4822" containerID="55b93e62b4f65de932584b817ba60092f21e3f44ea709a7dccfe6475d2084e38" exitCode=0
Feb 20 14:49:51.856909 master-0 kubenswrapper[7744]: I0220 14:49:51.856852 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2cdp" event={"ID":"ac3680de-aabf-414b-a340-5e5e6aea4822","Type":"ContainerDied","Data":"55b93e62b4f65de932584b817ba60092f21e3f44ea709a7dccfe6475d2084e38"}
Feb 20 14:49:51.865629 master-0 kubenswrapper[7744]: I0220 14:49:51.863750 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-nth67" event={"ID":"92008ac4-8deb-4fb9-9116-14d2d005bd36","Type":"ContainerStarted","Data":"e366984c121e8d2e113065b7ddcf8c580aefffdb74afe23f16e38dc9a00e5aa3"}
Feb 20 14:49:51.866212 master-0 kubenswrapper[7744]: I0220 14:49:51.866108 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5frvf" event={"ID":"ef3a09a5-b019-48a3-97f8-7ddadb37394e","Type":"ContainerStarted","Data":"ed3a51968be582f9405f7944baf7a18811d9549cf28f115ef204ab2e3755c685"}
Feb 20 14:49:51.866212 master-0 kubenswrapper[7744]: I0220 14:49:51.866189 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5frvf" event={"ID":"ef3a09a5-b019-48a3-97f8-7ddadb37394e","Type":"ContainerStarted","Data":"34cc992d367669608546ba8ae39873d4139dfeeb4850c5979567cde508c8b524"}
Feb 20 14:49:51.869333 master-0 kubenswrapper[7744]: I0220 14:49:51.868254 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4wzg" event={"ID":"93786626-fac4-48f0-bf72-992bc39f4a82","Type":"ContainerStarted","Data":"4b48185bed34b04ded3112db1a2c329d504a7ceb8c020ba9fbe406707b9c3662"}
Feb 20 14:49:51.872434 master-0 kubenswrapper[7744]: I0220 14:49:51.871422 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf" event={"ID":"3cb0fc3f-6897-4927-80fa-40cdf43be9a9","Type":"ContainerDied","Data":"256b92231ab60b649b2c69cb8d06f03de4978bccecb61c0ac5c4c69b38a812a5"}
Feb 20 14:49:51.872434 master-0 kubenswrapper[7744]: I0220 14:49:51.871443 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"
Feb 20 14:49:51.872434 master-0 kubenswrapper[7744]: I0220 14:49:51.871480 7744 scope.go:117] "RemoveContainer" containerID="b6ddae4597fc71cd45b90b78a64fd133418f5e2decf6123b01925d04d7783ce6"
Feb 20 14:49:51.883200 master-0 kubenswrapper[7744]: I0220 14:49:51.883131 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2" event={"ID":"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463","Type":"ContainerStarted","Data":"39729aa63d210240a6c419acbf228b3a124ab4900f3cc120e7b7aead6bf8e73a"}
Feb 20 14:49:51.887314 master-0 kubenswrapper[7744]: I0220 14:49:51.887218 7744 generic.go:334] "Generic (PLEG): container finished" podID="6e5d953b-dbc7-48df-9d6b-d61030ffd6e3" containerID="6bb51ccc67529cda0c8d2e85bd6a87b5b5906d7277689a9401dd4cc5bc52c400" exitCode=0
Feb 20 14:49:51.887314 master-0 kubenswrapper[7744]: I0220 14:49:51.887301 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5fhb" event={"ID":"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3","Type":"ContainerDied","Data":"6bb51ccc67529cda0c8d2e85bd6a87b5b5906d7277689a9401dd4cc5bc52c400"}
Feb 20 14:49:51.890467 master-0 kubenswrapper[7744]: I0220 14:49:51.890411 7744 generic.go:334] "Generic (PLEG): container finished" podID="b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d" containerID="501e152806072f51a6aa348d15cc2667dcd91a44e63ea82bf19b7f6a5b79b7c9" exitCode=0
Feb 20 14:49:51.890642 master-0 kubenswrapper[7744]: I0220 14:49:51.890605 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wddt" event={"ID":"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d","Type":"ContainerDied","Data":"501e152806072f51a6aa348d15cc2667dcd91a44e63ea82bf19b7f6a5b79b7c9"}
Feb 20 14:49:51.897185 master-0 kubenswrapper[7744]: I0220 14:49:51.892376 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" event={"ID":"5f55b652-bef8-4f50-9d1d-9d0a340c1dea","Type":"ContainerStarted","Data":"80505c2710f2e2216eec6a4e82e9601038f01af58386ea11bb977eb9c2b78e51"}
Feb 20 14:49:51.904961 master-0 kubenswrapper[7744]: I0220 14:49:51.904846 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5frvf" podStartSLOduration=12.904822108 podStartE2EDuration="12.904822108s" podCreationTimestamp="2026-02-20 14:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:49:51.897971749 +0000 UTC m=+191.100171659" watchObservedRunningTime="2026-02-20 14:49:51.904822108 +0000 UTC m=+191.107022028"
Feb 20 14:49:51.918655 master-0 kubenswrapper[7744]: I0220 14:49:51.918568 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-nth67" podStartSLOduration=240.918548126 podStartE2EDuration="4m0.918548126s" podCreationTimestamp="2026-02-20 14:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:49:51.915977593 +0000 UTC m=+191.118177543" watchObservedRunningTime="2026-02-20 14:49:51.918548126 +0000 UTC m=+191.120748046"
Feb 20 14:49:51.960334 master-0 kubenswrapper[7744]: I0220 14:49:51.960267 7744 scope.go:117] "RemoveContainer" containerID="9a9be4e938a26463163025a87daa920277947a01e83f7f6d8b5804bafc0ed314"
Feb 20 14:49:52.009150 master-0 kubenswrapper[7744]: I0220 14:49:52.009079 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2" podStartSLOduration=292.009056355 podStartE2EDuration="4m52.009056355s" podCreationTimestamp="2026-02-20 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:49:52.007890386 +0000 UTC m=+191.210090326" watchObservedRunningTime="2026-02-20 14:49:52.009056355 +0000 UTC m=+191.211256275"
Feb 20 14:49:52.028322 master-0 kubenswrapper[7744]: I0220 14:49:52.028251 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"]
Feb 20 14:49:52.036555 master-0 kubenswrapper[7744]: I0220 14:49:52.036464 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-vzbjf"]
Feb 20 14:49:52.057326 master-0 kubenswrapper[7744]: I0220 14:49:52.057199 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"]
Feb 20 14:49:52.057531 master-0 kubenswrapper[7744]: E0220 14:49:52.057493 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb0fc3f-6897-4927-80fa-40cdf43be9a9" containerName="machine-approver-controller"
Feb 20 14:49:52.057531 master-0 kubenswrapper[7744]: I0220 14:49:52.057522 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb0fc3f-6897-4927-80fa-40cdf43be9a9" containerName="machine-approver-controller"
Feb 20 14:49:52.057622 master-0 kubenswrapper[7744]: E0220 14:49:52.057550 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb0fc3f-6897-4927-80fa-40cdf43be9a9" containerName="kube-rbac-proxy"
Feb 20 14:49:52.057622 master-0 kubenswrapper[7744]: I0220 14:49:52.057561 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb0fc3f-6897-4927-80fa-40cdf43be9a9" containerName="kube-rbac-proxy"
Feb 20 14:49:52.057751 master-0 kubenswrapper[7744]: I0220 14:49:52.057712 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb0fc3f-6897-4927-80fa-40cdf43be9a9" containerName="machine-approver-controller"
Feb 20 14:49:52.057751 master-0 kubenswrapper[7744]: I0220 14:49:52.057746 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb0fc3f-6897-4927-80fa-40cdf43be9a9" containerName="kube-rbac-proxy"
Feb 20 14:49:52.058771 master-0 kubenswrapper[7744]: I0220 14:49:52.058735 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.064431 master-0 kubenswrapper[7744]: I0220 14:49:52.061663 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 20 14:49:52.064431 master-0 kubenswrapper[7744]: I0220 14:49:52.061978 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 20 14:49:52.064431 master-0 kubenswrapper[7744]: I0220 14:49:52.062150 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-ts5zc"
Feb 20 14:49:52.064431 master-0 kubenswrapper[7744]: I0220 14:49:52.062359 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 20 14:49:52.064431 master-0 kubenswrapper[7744]: I0220 14:49:52.062509 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 20 14:49:52.064431 master-0 kubenswrapper[7744]: I0220 14:49:52.062688 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 20 14:49:52.241043 master-0 kubenswrapper[7744]: I0220 14:49:52.239366 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/996d4949-f92c-42ac-9bda-8c6ec0295e92-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.241043 master-0 kubenswrapper[7744]: I0220 14:49:52.239575 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-config\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.241043 master-0 kubenswrapper[7744]: I0220 14:49:52.239617 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kfqn\" (UniqueName: \"kubernetes.io/projected/996d4949-f92c-42ac-9bda-8c6ec0295e92-kube-api-access-4kfqn\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.241043 master-0 kubenswrapper[7744]: I0220 14:49:52.239677 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.341183 master-0 kubenswrapper[7744]: I0220 14:49:52.341132 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-config\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.341183 master-0 kubenswrapper[7744]: I0220 14:49:52.341179 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kfqn\" (UniqueName: \"kubernetes.io/projected/996d4949-f92c-42ac-9bda-8c6ec0295e92-kube-api-access-4kfqn\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.341404 master-0 kubenswrapper[7744]: I0220 14:49:52.341207 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.341485 master-0 kubenswrapper[7744]: I0220 14:49:52.341422 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/996d4949-f92c-42ac-9bda-8c6ec0295e92-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.341885 master-0 kubenswrapper[7744]: I0220 14:49:52.341843 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-config\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.341937 master-0 kubenswrapper[7744]: I0220 14:49:52.341895 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.344853 master-0 kubenswrapper[7744]: I0220 14:49:52.344785 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/996d4949-f92c-42ac-9bda-8c6ec0295e92-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.357798 master-0 kubenswrapper[7744]: I0220 14:49:52.357758 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kfqn\" (UniqueName: \"kubernetes.io/projected/996d4949-f92c-42ac-9bda-8c6ec0295e92-kube-api-access-4kfqn\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.387065 master-0 kubenswrapper[7744]: I0220 14:49:52.386996 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 14:49:52.495605 master-0 kubenswrapper[7744]: W0220 14:49:52.495483 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996d4949_f92c_42ac_9bda_8c6ec0295e92.slice/crio-8ef8165957098f6be8792289e9cb306a276c73110e287a7b80ba51a3888e812c WatchSource:0}: Error finding container 8ef8165957098f6be8792289e9cb306a276c73110e287a7b80ba51a3888e812c: Status 404 returned error can't find the container with id 8ef8165957098f6be8792289e9cb306a276c73110e287a7b80ba51a3888e812c
Feb 20 14:49:52.908560 master-0 kubenswrapper[7744]: I0220 14:49:52.908380 7744 generic.go:334] "Generic (PLEG): container finished" podID="93786626-fac4-48f0-bf72-992bc39f4a82" containerID="4b48185bed34b04ded3112db1a2c329d504a7ceb8c020ba9fbe406707b9c3662" exitCode=0
Feb 20 14:49:52.908560 master-0 kubenswrapper[7744]: I0220 14:49:52.908470 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4wzg" event={"ID":"93786626-fac4-48f0-bf72-992bc39f4a82","Type":"ContainerDied","Data":"4b48185bed34b04ded3112db1a2c329d504a7ceb8c020ba9fbe406707b9c3662"}
Feb 20 14:49:52.911560 master-0 kubenswrapper[7744]: I0220 14:49:52.911525 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" event={"ID":"996d4949-f92c-42ac-9bda-8c6ec0295e92","Type":"ContainerStarted","Data":"8ef8165957098f6be8792289e9cb306a276c73110e287a7b80ba51a3888e812c"}
Feb 20 14:49:53.048669 master-0 kubenswrapper[7744]: I0220 14:49:53.048605 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb0fc3f-6897-4927-80fa-40cdf43be9a9" path="/var/lib/kubelet/pods/3cb0fc3f-6897-4927-80fa-40cdf43be9a9/volumes"
Feb 20 14:49:53.929898 master-0 kubenswrapper[7744]: I0220 14:49:53.929854 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4wzg" event={"ID":"93786626-fac4-48f0-bf72-992bc39f4a82","Type":"ContainerStarted","Data":"b0987a23de1af7452aa858f67b72055860aa4f74e71922797df18cc1e04dddf0"}
Feb 20 14:49:53.938528 master-0 kubenswrapper[7744]: I0220 14:49:53.938492 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2cdp" event={"ID":"ac3680de-aabf-414b-a340-5e5e6aea4822","Type":"ContainerStarted","Data":"c0aa3c8e94eb252e0640fa7490825fb0751114e737ccaefad40144b9aceb63d9"}
Feb 20 14:49:53.944423 master-0 kubenswrapper[7744]: I0220 14:49:53.944136 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp" event={"ID":"e3cc4073-a926-4aba-81e6-c616c2bb2987","Type":"ContainerStarted","Data":"c7fbdaabc9defebc24663b20b460123ec251f6593568a39a6e85af3aef0bcfd5"}
Feb 20 14:49:53.944652 master-0 kubenswrapper[7744]: I0220 14:49:53.944615 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp"
Feb 20 14:49:53.947826 master-0 kubenswrapper[7744]: I0220 14:49:53.947798 7744 generic.go:334] "Generic (PLEG): container finished" podID="8c60ad1f-f8d9-4c67-97a3-f9fa491bd463" containerID="39729aa63d210240a6c419acbf228b3a124ab4900f3cc120e7b7aead6bf8e73a" exitCode=0
Feb 20 14:49:53.947972 master-0 kubenswrapper[7744]: I0220 14:49:53.947861 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2" event={"ID":"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463","Type":"ContainerDied","Data":"39729aa63d210240a6c419acbf228b3a124ab4900f3cc120e7b7aead6bf8e73a"}
Feb 20 14:49:53.950472 master-0 kubenswrapper[7744]: I0220 14:49:53.950459 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp"
Feb 20 14:49:53.951047 master-0 kubenswrapper[7744]: I0220 14:49:53.951001 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5fhb" event={"ID":"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3","Type":"ContainerStarted","Data":"6bdf8ae8895847f111c076e57ac2ee7237248e5947f527438f2b1ae9a2034af5"}
Feb 20 14:49:53.956698 master-0 kubenswrapper[7744]: I0220 14:49:53.956628 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z4wzg" podStartSLOduration=7.776908655 podStartE2EDuration="34.956611536s" podCreationTimestamp="2026-02-20 14:49:19 +0000 UTC" firstStartedPulling="2026-02-20 14:49:26.55339376 +0000 UTC m=+165.755593710" lastFinishedPulling="2026-02-20 14:49:53.733096631 +0000 UTC m=+192.935296591" observedRunningTime="2026-02-20 14:49:53.953787286 +0000 UTC m=+193.155987246" watchObservedRunningTime="2026-02-20 14:49:53.956611536 +0000 UTC m=+193.158811496"
Feb 20 14:49:53.957393 master-0 kubenswrapper[7744]: I0220 14:49:53.957365 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" event={"ID":"5f55b652-bef8-4f50-9d1d-9d0a340c1dea","Type":"ContainerStarted","Data":"f3242db03cac46e4568d01c2eb90056f6c103228ea7040c2d234fdcf31ba865d"}
Feb 20 14:49:53.958969 master-0 kubenswrapper[7744]: I0220 14:49:53.958935 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" event={"ID":"996d4949-f92c-42ac-9bda-8c6ec0295e92","Type":"ContainerStarted","Data":"4f03a59c794ee73bb7ffacd9f9054d362f5e6f5814326c13ca3530a0f5caacfb"}
Feb 20 14:49:53.964702 master-0 kubenswrapper[7744]: I0220 14:49:53.964654 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wddt" event={"ID":"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d","Type":"ContainerStarted","Data":"52067142a26667c2638519b1973ebe093bf73aa0ce624b9cf4768d4f63063be7"}
Feb 20 14:49:54.048615 master-0 kubenswrapper[7744]: I0220 14:49:54.048476 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp" podStartSLOduration=148.146939663 podStartE2EDuration="2m32.048452168s" podCreationTimestamp="2026-02-20 14:47:22 +0000 UTC" firstStartedPulling="2026-02-20 14:49:49.365948029 +0000 UTC m=+188.568147989" lastFinishedPulling="2026-02-20 14:49:53.267460574 +0000 UTC m=+192.469660494" observedRunningTime="2026-02-20 14:49:54.006536576 +0000 UTC m=+193.208736496" watchObservedRunningTime="2026-02-20 14:49:54.048452168 +0000 UTC m=+193.250652088"
Feb 20 14:49:54.051389 master-0 kubenswrapper[7744]: I0220 14:49:54.051320 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n2cdp" podStartSLOduration=9.348574153 podStartE2EDuration="36.051302598s" podCreationTimestamp="2026-02-20 14:49:18 +0000 UTC" firstStartedPulling="2026-02-20 14:49:26.554352674 +0000 UTC m=+165.756552624" lastFinishedPulling="2026-02-20 14:49:53.257081149 +0000 UTC m=+192.459281069" observedRunningTime="2026-02-20 14:49:54.048001127 +0000 UTC m=+193.250201057" watchObservedRunningTime="2026-02-20 14:49:54.051302598 +0000 UTC m=+193.253502518"
Feb 20 14:49:54.070090 master-0 kubenswrapper[7744]: I0220 14:49:54.069466 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x5fhb" podStartSLOduration=10.279885395 podStartE2EDuration="37.069452685s" podCreationTimestamp="2026-02-20 14:49:17 +0000 UTC" firstStartedPulling="2026-02-20 14:49:26.506061597 +0000 UTC m=+165.708261517" lastFinishedPulling="2026-02-20 14:49:53.295628867 +0000 UTC m=+192.497828807" observedRunningTime="2026-02-20 14:49:54.068310647 +0000 UTC m=+193.270510567" watchObservedRunningTime="2026-02-20 14:49:54.069452685 +0000 UTC m=+193.271652595"
Feb 20 14:49:54.087453 master-0 kubenswrapper[7744]: I0220 14:49:54.087360 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9wddt" podStartSLOduration=11.331461933 podStartE2EDuration="38.087340886s" podCreationTimestamp="2026-02-20 14:49:16 +0000 UTC" firstStartedPulling="2026-02-20 14:49:26.539247692 +0000 UTC m=+165.741447612" lastFinishedPulling="2026-02-20 14:49:53.295126605 +0000 UTC m=+192.497326565" observedRunningTime="2026-02-20 14:49:54.086064804 +0000 UTC m=+193.288264784" watchObservedRunningTime="2026-02-20 14:49:54.087340886 +0000 UTC m=+193.289540806"
Feb 20 14:49:54.111947 master-0 kubenswrapper[7744]: I0220 14:49:54.111870 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podStartSLOduration=14.150029767 podStartE2EDuration="16.111853679s" podCreationTimestamp="2026-02-20 14:49:38 +0000 UTC" firstStartedPulling="2026-02-20 14:49:51.39326546 +0000 UTC m=+190.595465380" lastFinishedPulling="2026-02-20 14:49:53.355089372 +0000 UTC m=+192.557289292" observedRunningTime="2026-02-20 14:49:54.11068426 +0000 UTC m=+193.312884180" watchObservedRunningTime="2026-02-20 14:49:54.111853679 +0000 UTC m=+193.314053599"
Feb 20 14:49:54.314311 master-0 kubenswrapper[7744]: I0220 14:49:54.314170 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt"
Feb 20 14:49:54.319797 master-0 kubenswrapper[7744]: I0220 14:49:54.319743 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:49:54.319797 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:49:54.319797 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:49:54.319797 master-0 
kubenswrapper[7744]: healthz check failed Feb 20 14:49:54.320117 master-0 kubenswrapper[7744]: I0220 14:49:54.319815 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:49:54.974458 master-0 kubenswrapper[7744]: I0220 14:49:54.974405 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" event={"ID":"996d4949-f92c-42ac-9bda-8c6ec0295e92","Type":"ContainerStarted","Data":"b6e9e6d9ccde8375bcdecc9c3bf9ed6951fb841bc2a4f124a46a0fefb565de16"} Feb 20 14:49:55.003370 master-0 kubenswrapper[7744]: I0220 14:49:55.003286 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" podStartSLOduration=3.003268702 podStartE2EDuration="3.003268702s" podCreationTimestamp="2026-02-20 14:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:49:55.002154275 +0000 UTC m=+194.204354235" watchObservedRunningTime="2026-02-20 14:49:55.003268702 +0000 UTC m=+194.205468632" Feb 20 14:49:55.118181 master-0 kubenswrapper[7744]: I0220 14:49:55.118099 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-gsn48"] Feb 20 14:49:55.122772 master-0 kubenswrapper[7744]: I0220 14:49:55.120087 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.126455 master-0 kubenswrapper[7744]: I0220 14:49:55.126419 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 20 14:49:55.126630 master-0 kubenswrapper[7744]: I0220 14:49:55.126518 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 20 14:49:55.126630 master-0 kubenswrapper[7744]: I0220 14:49:55.126543 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-s2d9t" Feb 20 14:49:55.126702 master-0 kubenswrapper[7744]: I0220 14:49:55.126669 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 20 14:49:55.131532 master-0 kubenswrapper[7744]: I0220 14:49:55.131470 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-gsn48"] Feb 20 14:49:55.276048 master-0 kubenswrapper[7744]: I0220 14:49:55.275996 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2" Feb 20 14:49:55.277595 master-0 kubenswrapper[7744]: I0220 14:49:55.277558 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae43311e-14ba-40a1-bdbf-f02d68031757-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.277730 master-0 kubenswrapper[7744]: I0220 14:49:55.277672 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf5p9\" (UniqueName: \"kubernetes.io/projected/ae43311e-14ba-40a1-bdbf-f02d68031757-kube-api-access-mf5p9\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.277852 master-0 kubenswrapper[7744]: I0220 14:49:55.277816 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.278036 master-0 kubenswrapper[7744]: I0220 14:49:55.277909 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.317782 
master-0 kubenswrapper[7744]: I0220 14:49:55.317731 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:49:55.317782 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:49:55.317782 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:49:55.317782 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:49:55.318038 master-0 kubenswrapper[7744]: I0220 14:49:55.317797 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:49:55.390023 master-0 kubenswrapper[7744]: I0220 14:49:55.387822 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-config-volume\") pod \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\" (UID: \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\") " Feb 20 14:49:55.390023 master-0 kubenswrapper[7744]: I0220 14:49:55.387934 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-secret-volume\") pod \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\" (UID: \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\") " Feb 20 14:49:55.390023 master-0 kubenswrapper[7744]: I0220 14:49:55.387995 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98rjm\" (UniqueName: \"kubernetes.io/projected/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-kube-api-access-98rjm\") pod \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\" (UID: \"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463\") " Feb 20 14:49:55.390023 
master-0 kubenswrapper[7744]: I0220 14:49:55.388174 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae43311e-14ba-40a1-bdbf-f02d68031757-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.390023 master-0 kubenswrapper[7744]: I0220 14:49:55.388215 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf5p9\" (UniqueName: \"kubernetes.io/projected/ae43311e-14ba-40a1-bdbf-f02d68031757-kube-api-access-mf5p9\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.390023 master-0 kubenswrapper[7744]: I0220 14:49:55.388248 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.390023 master-0 kubenswrapper[7744]: I0220 14:49:55.388285 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.392427 master-0 kubenswrapper[7744]: I0220 14:49:55.392379 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c60ad1f-f8d9-4c67-97a3-f9fa491bd463" (UID: "8c60ad1f-f8d9-4c67-97a3-f9fa491bd463"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 14:49:55.395374 master-0 kubenswrapper[7744]: I0220 14:49:55.395322 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae43311e-14ba-40a1-bdbf-f02d68031757-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.405565 master-0 kubenswrapper[7744]: I0220 14:49:55.399270 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.409626 master-0 kubenswrapper[7744]: I0220 14:49:55.409087 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8c60ad1f-f8d9-4c67-97a3-f9fa491bd463" (UID: "8c60ad1f-f8d9-4c67-97a3-f9fa491bd463"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 14:49:55.409626 master-0 kubenswrapper[7744]: I0220 14:49:55.409140 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-kube-api-access-98rjm" (OuterVolumeSpecName: "kube-api-access-98rjm") pod "8c60ad1f-f8d9-4c67-97a3-f9fa491bd463" (UID: "8c60ad1f-f8d9-4c67-97a3-f9fa491bd463"). 
InnerVolumeSpecName "kube-api-access-98rjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:49:55.412045 master-0 kubenswrapper[7744]: I0220 14:49:55.411980 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.413946 master-0 kubenswrapper[7744]: I0220 14:49:55.413537 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf5p9\" (UniqueName: \"kubernetes.io/projected/ae43311e-14ba-40a1-bdbf-f02d68031757-kube-api-access-mf5p9\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.445504 master-0 kubenswrapper[7744]: I0220 14:49:55.445450 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 14:49:55.489889 master-0 kubenswrapper[7744]: I0220 14:49:55.489848 7744 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 20 14:49:55.489889 master-0 kubenswrapper[7744]: I0220 14:49:55.489884 7744 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 20 14:49:55.490126 master-0 kubenswrapper[7744]: I0220 14:49:55.489901 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98rjm\" (UniqueName: \"kubernetes.io/projected/8c60ad1f-f8d9-4c67-97a3-f9fa491bd463-kube-api-access-98rjm\") on node \"master-0\" DevicePath \"\"" Feb 20 14:49:55.872642 master-0 kubenswrapper[7744]: I0220 14:49:55.872594 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-gsn48"] Feb 20 14:49:55.874052 master-0 kubenswrapper[7744]: W0220 14:49:55.874006 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae43311e_14ba_40a1_bdbf_f02d68031757.slice/crio-3c3c6a0066a2da65aa0c6f5621f865feea551c3602354f05a3bf53b7f588a01e WatchSource:0}: Error finding container 3c3c6a0066a2da65aa0c6f5621f865feea551c3602354f05a3bf53b7f588a01e: Status 404 returned error can't find the container with id 3c3c6a0066a2da65aa0c6f5621f865feea551c3602354f05a3bf53b7f588a01e Feb 20 14:49:55.982790 master-0 kubenswrapper[7744]: I0220 14:49:55.982712 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2" 
event={"ID":"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463","Type":"ContainerDied","Data":"ea68c4defdeeb01e90817720006f1125f253badcc4d0dde7d2c2223dd487b94c"} Feb 20 14:49:55.982790 master-0 kubenswrapper[7744]: I0220 14:49:55.982765 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea68c4defdeeb01e90817720006f1125f253badcc4d0dde7d2c2223dd487b94c" Feb 20 14:49:55.983669 master-0 kubenswrapper[7744]: I0220 14:49:55.982833 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2" Feb 20 14:49:55.984561 master-0 kubenswrapper[7744]: I0220 14:49:55.984524 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" event={"ID":"ae43311e-14ba-40a1-bdbf-f02d68031757","Type":"ContainerStarted","Data":"3c3c6a0066a2da65aa0c6f5621f865feea551c3602354f05a3bf53b7f588a01e"} Feb 20 14:49:55.987794 master-0 kubenswrapper[7744]: I0220 14:49:55.987742 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw"] Feb 20 14:49:55.988081 master-0 kubenswrapper[7744]: I0220 14:49:55.988032 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerName="cluster-cloud-controller-manager" containerID="cri-o://ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645" gracePeriod=30 Feb 20 14:49:55.988081 master-0 kubenswrapper[7744]: I0220 14:49:55.988054 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerName="config-sync-controllers" 
containerID="cri-o://54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f" gracePeriod=30 Feb 20 14:49:55.988251 master-0 kubenswrapper[7744]: I0220 14:49:55.988048 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerName="kube-rbac-proxy" containerID="cri-o://515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a" gracePeriod=30 Feb 20 14:49:56.132361 master-0 kubenswrapper[7744]: I0220 14:49:56.132311 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:56.301319 master-0 kubenswrapper[7744]: I0220 14:49:56.301227 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sktfg\" (UniqueName: \"kubernetes.io/projected/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-kube-api-access-sktfg\") pod \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " Feb 20 14:49:56.301319 master-0 kubenswrapper[7744]: I0220 14:49:56.301307 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-images\") pod \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " Feb 20 14:49:56.301692 master-0 kubenswrapper[7744]: I0220 14:49:56.301435 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-auth-proxy-config\") pod \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " Feb 20 14:49:56.301692 master-0 kubenswrapper[7744]: I0220 14:49:56.301477 7744 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-host-etc-kube\") pod \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " Feb 20 14:49:56.301692 master-0 kubenswrapper[7744]: I0220 14:49:56.301533 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-cloud-controller-manager-operator-tls\") pod \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\" (UID: \"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4\") " Feb 20 14:49:56.301692 master-0 kubenswrapper[7744]: I0220 14:49:56.301599 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" (UID: "9fa7b31e-a95e-44dc-9c4c-211e8b5718e4"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:49:56.302114 master-0 kubenswrapper[7744]: I0220 14:49:56.301891 7744 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Feb 20 14:49:56.302114 master-0 kubenswrapper[7744]: I0220 14:49:56.302013 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-images" (OuterVolumeSpecName: "images") pod "9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" (UID: "9fa7b31e-a95e-44dc-9c4c-211e8b5718e4"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 14:49:56.302114 master-0 kubenswrapper[7744]: I0220 14:49:56.302022 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" (UID: "9fa7b31e-a95e-44dc-9c4c-211e8b5718e4"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 14:49:56.304325 master-0 kubenswrapper[7744]: I0220 14:49:56.304263 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" (UID: "9fa7b31e-a95e-44dc-9c4c-211e8b5718e4"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 14:49:56.306768 master-0 kubenswrapper[7744]: I0220 14:49:56.306408 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-kube-api-access-sktfg" (OuterVolumeSpecName: "kube-api-access-sktfg") pod "9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" (UID: "9fa7b31e-a95e-44dc-9c4c-211e8b5718e4"). InnerVolumeSpecName "kube-api-access-sktfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:49:56.316653 master-0 kubenswrapper[7744]: I0220 14:49:56.316581 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:49:56.316653 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:49:56.316653 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:49:56.316653 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:49:56.316989 master-0 kubenswrapper[7744]: I0220 14:49:56.316658 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:49:56.404131 master-0 kubenswrapper[7744]: I0220 14:49:56.403959 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sktfg\" (UniqueName: \"kubernetes.io/projected/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-kube-api-access-sktfg\") on node \"master-0\" DevicePath \"\"" Feb 20 14:49:56.404131 master-0 kubenswrapper[7744]: I0220 14:49:56.404027 7744 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-images\") on node \"master-0\" DevicePath \"\"" Feb 20 14:49:56.404131 master-0 kubenswrapper[7744]: I0220 14:49:56.404050 7744 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 20 14:49:56.404131 master-0 kubenswrapper[7744]: I0220 14:49:56.404082 7744 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Feb 20 14:49:57.006970 master-0 kubenswrapper[7744]: I0220 14:49:57.001523 7744 generic.go:334] "Generic (PLEG): container finished" podID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerID="515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a" exitCode=0 Feb 20 14:49:57.006970 master-0 kubenswrapper[7744]: I0220 14:49:57.001582 7744 generic.go:334] "Generic (PLEG): container finished" podID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerID="54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f" exitCode=0 Feb 20 14:49:57.006970 master-0 kubenswrapper[7744]: I0220 14:49:57.001598 7744 generic.go:334] "Generic (PLEG): container finished" podID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerID="ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645" exitCode=0 Feb 20 14:49:57.006970 master-0 kubenswrapper[7744]: I0220 14:49:57.001649 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" Feb 20 14:49:57.006970 master-0 kubenswrapper[7744]: I0220 14:49:57.001681 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" event={"ID":"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4","Type":"ContainerDied","Data":"515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a"} Feb 20 14:49:57.006970 master-0 kubenswrapper[7744]: I0220 14:49:57.001738 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" event={"ID":"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4","Type":"ContainerDied","Data":"54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f"} Feb 20 14:49:57.006970 master-0 kubenswrapper[7744]: I0220 14:49:57.001763 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" event={"ID":"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4","Type":"ContainerDied","Data":"ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645"} Feb 20 14:49:57.006970 master-0 kubenswrapper[7744]: I0220 14:49:57.001783 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw" event={"ID":"9fa7b31e-a95e-44dc-9c4c-211e8b5718e4","Type":"ContainerDied","Data":"527e8d5ed18d502241ed77c666a2e3681427d1002e24a244333ea95634d1d2f8"} Feb 20 14:49:57.006970 master-0 kubenswrapper[7744]: I0220 14:49:57.001811 7744 scope.go:117] "RemoveContainer" containerID="515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a" Feb 20 14:49:57.063659 master-0 kubenswrapper[7744]: I0220 14:49:57.063597 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw"] Feb 20 14:49:57.064413 master-0 kubenswrapper[7744]: I0220 14:49:57.064368 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-s4pqw"] Feb 20 14:49:57.110085 master-0 kubenswrapper[7744]: I0220 14:49:57.110023 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj"] Feb 20 14:49:57.110289 master-0 kubenswrapper[7744]: E0220 14:49:57.110244 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerName="kube-rbac-proxy" Feb 20 14:49:57.110289 master-0 kubenswrapper[7744]: I0220 14:49:57.110256 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerName="kube-rbac-proxy" Feb 20 14:49:57.110289 master-0 kubenswrapper[7744]: E0220 14:49:57.110268 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerName="config-sync-controllers" Feb 20 14:49:57.110289 master-0 kubenswrapper[7744]: I0220 14:49:57.110275 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerName="config-sync-controllers" Feb 20 14:49:57.110289 master-0 kubenswrapper[7744]: E0220 14:49:57.110283 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c60ad1f-f8d9-4c67-97a3-f9fa491bd463" containerName="collect-profiles" Feb 20 14:49:57.110858 master-0 kubenswrapper[7744]: I0220 14:49:57.110780 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c60ad1f-f8d9-4c67-97a3-f9fa491bd463" containerName="collect-profiles" Feb 20 14:49:57.110858 master-0 kubenswrapper[7744]: E0220 14:49:57.110854 7744 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerName="cluster-cloud-controller-manager" Feb 20 14:49:57.110995 master-0 kubenswrapper[7744]: I0220 14:49:57.110863 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerName="cluster-cloud-controller-manager" Feb 20 14:49:57.111074 master-0 kubenswrapper[7744]: I0220 14:49:57.111047 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c60ad1f-f8d9-4c67-97a3-f9fa491bd463" containerName="collect-profiles" Feb 20 14:49:57.111136 master-0 kubenswrapper[7744]: I0220 14:49:57.111080 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerName="kube-rbac-proxy" Feb 20 14:49:57.111136 master-0 kubenswrapper[7744]: I0220 14:49:57.111098 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerName="config-sync-controllers" Feb 20 14:49:57.111136 master-0 kubenswrapper[7744]: I0220 14:49:57.111114 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" containerName="cluster-cloud-controller-manager" Feb 20 14:49:57.112039 master-0 kubenswrapper[7744]: I0220 14:49:57.112013 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.115044 master-0 kubenswrapper[7744]: I0220 14:49:57.114986 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-mnmfc" Feb 20 14:49:57.115336 master-0 kubenswrapper[7744]: I0220 14:49:57.115283 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 20 14:49:57.115400 master-0 kubenswrapper[7744]: I0220 14:49:57.115361 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 20 14:49:57.115565 master-0 kubenswrapper[7744]: I0220 14:49:57.115503 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 20 14:49:57.115982 master-0 kubenswrapper[7744]: I0220 14:49:57.115704 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 20 14:49:57.115982 master-0 kubenswrapper[7744]: I0220 14:49:57.115946 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 20 14:49:57.223808 master-0 kubenswrapper[7744]: I0220 14:49:57.215044 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.223808 master-0 
kubenswrapper[7744]: I0220 14:49:57.215322 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.223808 master-0 kubenswrapper[7744]: I0220 14:49:57.215416 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/caef1c17-56b0-479c-b000-caaac3c2b249-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.223808 master-0 kubenswrapper[7744]: I0220 14:49:57.215550 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kgzf\" (UniqueName: \"kubernetes.io/projected/caef1c17-56b0-479c-b000-caaac3c2b249-kube-api-access-8kgzf\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.223808 master-0 kubenswrapper[7744]: I0220 14:49:57.215575 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/caef1c17-56b0-479c-b000-caaac3c2b249-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.316322 master-0 kubenswrapper[7744]: I0220 14:49:57.316197 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:49:57.316322 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:49:57.316322 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:49:57.316322 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:49:57.316322 master-0 kubenswrapper[7744]: I0220 14:49:57.316299 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:49:57.316641 master-0 kubenswrapper[7744]: I0220 14:49:57.316495 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kgzf\" (UniqueName: \"kubernetes.io/projected/caef1c17-56b0-479c-b000-caaac3c2b249-kube-api-access-8kgzf\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.316641 master-0 kubenswrapper[7744]: I0220 14:49:57.316575 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/caef1c17-56b0-479c-b000-caaac3c2b249-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.316726 master-0 kubenswrapper[7744]: I0220 14:49:57.316698 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.316793 master-0 kubenswrapper[7744]: I0220 14:49:57.316761 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/caef1c17-56b0-479c-b000-caaac3c2b249-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.316863 master-0 kubenswrapper[7744]: I0220 14:49:57.316775 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.316910 master-0 kubenswrapper[7744]: I0220 14:49:57.316887 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/caef1c17-56b0-479c-b000-caaac3c2b249-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: 
\"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.318400 master-0 kubenswrapper[7744]: I0220 14:49:57.318365 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.318796 master-0 kubenswrapper[7744]: I0220 14:49:57.318756 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.322037 master-0 kubenswrapper[7744]: I0220 14:49:57.320720 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/caef1c17-56b0-479c-b000-caaac3c2b249-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.441242 master-0 kubenswrapper[7744]: I0220 14:49:57.441164 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9wddt" Feb 20 14:49:57.441242 master-0 kubenswrapper[7744]: I0220 14:49:57.441244 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-9wddt" Feb 20 14:49:57.446619 master-0 kubenswrapper[7744]: I0220 14:49:57.446573 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kgzf\" (UniqueName: \"kubernetes.io/projected/caef1c17-56b0-479c-b000-caaac3c2b249-kube-api-access-8kgzf\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:57.498420 master-0 kubenswrapper[7744]: I0220 14:49:57.498327 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x5fhb" Feb 20 14:49:57.498608 master-0 kubenswrapper[7744]: I0220 14:49:57.498576 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9wddt" Feb 20 14:49:57.498714 master-0 kubenswrapper[7744]: I0220 14:49:57.498614 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x5fhb" Feb 20 14:49:57.556553 master-0 kubenswrapper[7744]: I0220 14:49:57.556519 7744 scope.go:117] "RemoveContainer" containerID="54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f" Feb 20 14:49:57.556941 master-0 kubenswrapper[7744]: I0220 14:49:57.556889 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x5fhb" Feb 20 14:49:57.622236 master-0 kubenswrapper[7744]: I0220 14:49:57.622162 7744 scope.go:117] "RemoveContainer" containerID="ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645" Feb 20 14:49:57.649199 master-0 kubenswrapper[7744]: I0220 14:49:57.649128 7744 scope.go:117] "RemoveContainer" containerID="515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a" Feb 20 14:49:57.649625 master-0 kubenswrapper[7744]: E0220 
14:49:57.649579 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a\": container with ID starting with 515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a not found: ID does not exist" containerID="515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a" Feb 20 14:49:57.649806 master-0 kubenswrapper[7744]: I0220 14:49:57.649766 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a"} err="failed to get container status \"515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a\": rpc error: code = NotFound desc = could not find container \"515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a\": container with ID starting with 515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a not found: ID does not exist" Feb 20 14:49:57.649969 master-0 kubenswrapper[7744]: I0220 14:49:57.649946 7744 scope.go:117] "RemoveContainer" containerID="54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f" Feb 20 14:49:57.651327 master-0 kubenswrapper[7744]: E0220 14:49:57.651277 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f\": container with ID starting with 54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f not found: ID does not exist" containerID="54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f" Feb 20 14:49:57.651427 master-0 kubenswrapper[7744]: I0220 14:49:57.651343 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f"} err="failed to get container status 
\"54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f\": rpc error: code = NotFound desc = could not find container \"54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f\": container with ID starting with 54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f not found: ID does not exist" Feb 20 14:49:57.651427 master-0 kubenswrapper[7744]: I0220 14:49:57.651376 7744 scope.go:117] "RemoveContainer" containerID="ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645" Feb 20 14:49:57.651812 master-0 kubenswrapper[7744]: E0220 14:49:57.651766 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645\": container with ID starting with ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645 not found: ID does not exist" containerID="ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645" Feb 20 14:49:57.651812 master-0 kubenswrapper[7744]: I0220 14:49:57.651798 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645"} err="failed to get container status \"ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645\": rpc error: code = NotFound desc = could not find container \"ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645\": container with ID starting with ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645 not found: ID does not exist" Feb 20 14:49:57.651812 master-0 kubenswrapper[7744]: I0220 14:49:57.651816 7744 scope.go:117] "RemoveContainer" containerID="515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a" Feb 20 14:49:57.653189 master-0 kubenswrapper[7744]: I0220 14:49:57.653111 7744 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a"} err="failed to get container status \"515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a\": rpc error: code = NotFound desc = could not find container \"515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a\": container with ID starting with 515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a not found: ID does not exist" Feb 20 14:49:57.653189 master-0 kubenswrapper[7744]: I0220 14:49:57.653146 7744 scope.go:117] "RemoveContainer" containerID="54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f" Feb 20 14:49:57.653426 master-0 kubenswrapper[7744]: I0220 14:49:57.653394 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f"} err="failed to get container status \"54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f\": rpc error: code = NotFound desc = could not find container \"54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f\": container with ID starting with 54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f not found: ID does not exist" Feb 20 14:49:57.653507 master-0 kubenswrapper[7744]: I0220 14:49:57.653429 7744 scope.go:117] "RemoveContainer" containerID="ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645" Feb 20 14:49:57.653800 master-0 kubenswrapper[7744]: I0220 14:49:57.653769 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645"} err="failed to get container status \"ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645\": rpc error: code = NotFound desc = could not find container \"ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645\": container with ID starting with 
ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645 not found: ID does not exist" Feb 20 14:49:57.653800 master-0 kubenswrapper[7744]: I0220 14:49:57.653790 7744 scope.go:117] "RemoveContainer" containerID="515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a" Feb 20 14:49:57.654129 master-0 kubenswrapper[7744]: I0220 14:49:57.654100 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a"} err="failed to get container status \"515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a\": rpc error: code = NotFound desc = could not find container \"515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a\": container with ID starting with 515415cb643a34fff9c61d7f0723ba2fd72d2d256d156e76e399b3617decc41a not found: ID does not exist" Feb 20 14:49:57.654129 master-0 kubenswrapper[7744]: I0220 14:49:57.654123 7744 scope.go:117] "RemoveContainer" containerID="54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f" Feb 20 14:49:57.654436 master-0 kubenswrapper[7744]: I0220 14:49:57.654373 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f"} err="failed to get container status \"54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f\": rpc error: code = NotFound desc = could not find container \"54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f\": container with ID starting with 54ec7ce9956b180ef005ca05cbfe1ce33f814778b26f3d0ceb60bda69a713f1f not found: ID does not exist" Feb 20 14:49:57.654436 master-0 kubenswrapper[7744]: I0220 14:49:57.654407 7744 scope.go:117] "RemoveContainer" containerID="ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645" Feb 20 14:49:57.654866 master-0 kubenswrapper[7744]: I0220 14:49:57.654830 7744 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645"} err="failed to get container status \"ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645\": rpc error: code = NotFound desc = could not find container \"ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645\": container with ID starting with ac64c838fffa896b6a57baf07b2ddc2f960beaced229ab1d2a48f975cfba8645 not found: ID does not exist" Feb 20 14:49:57.730521 master-0 kubenswrapper[7744]: I0220 14:49:57.730471 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 14:49:58.008774 master-0 kubenswrapper[7744]: I0220 14:49:58.008723 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerStarted","Data":"f83848e1580bc2bc923ed29b258b640fe63d1b2a36889eeff462ef2f63db0d04"} Feb 20 14:49:58.020948 master-0 kubenswrapper[7744]: I0220 14:49:58.020686 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" event={"ID":"ae43311e-14ba-40a1-bdbf-f02d68031757","Type":"ContainerStarted","Data":"addc02a5d315de2c99503b1e4806fe5dbe1a200c2f58b5fc9834e01320e787df"} Feb 20 14:49:58.020948 master-0 kubenswrapper[7744]: I0220 14:49:58.020722 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" event={"ID":"ae43311e-14ba-40a1-bdbf-f02d68031757","Type":"ContainerStarted","Data":"98cefd97ab706d635159fe166fdd26af88aed13fdb9a558beff59fd90bc32cf6"} Feb 20 14:49:58.049641 master-0 kubenswrapper[7744]: I0220 14:49:58.044242 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" podStartSLOduration=1.269726011 podStartE2EDuration="3.044219491s" podCreationTimestamp="2026-02-20 14:49:55 +0000 UTC" firstStartedPulling="2026-02-20 14:49:55.876739763 +0000 UTC m=+195.078939683" lastFinishedPulling="2026-02-20 14:49:57.651233243 +0000 UTC m=+196.853433163" observedRunningTime="2026-02-20 14:49:58.03887746 +0000 UTC m=+197.241077380" watchObservedRunningTime="2026-02-20 14:49:58.044219491 +0000 UTC m=+197.246419411" Feb 20 14:49:58.078710 master-0 kubenswrapper[7744]: I0220 14:49:58.078652 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9wddt" Feb 20 14:49:58.079564 master-0 kubenswrapper[7744]: I0220 14:49:58.079274 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x5fhb" Feb 20 14:49:58.316843 master-0 kubenswrapper[7744]: I0220 14:49:58.316769 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:49:58.316843 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:49:58.316843 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:49:58.316843 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:49:58.317167 master-0 kubenswrapper[7744]: I0220 14:49:58.316876 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:49:59.030870 master-0 kubenswrapper[7744]: I0220 14:49:59.030808 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerStarted","Data":"24414df873e0571bc67283c69593b9f634f2224fa05d695362ba0c99afbe232a"} Feb 20 14:49:59.030870 master-0 kubenswrapper[7744]: I0220 14:49:59.030867 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerStarted","Data":"40d63e74e24fee68be44b5de74837dcb78a9dc13e3f7cf14b4e7c069fc14a3c1"} Feb 20 14:49:59.031852 master-0 kubenswrapper[7744]: I0220 14:49:59.030891 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerStarted","Data":"ba4791195ab28fdefd71609ee2f152b2f868666e0ec80047600b61f1c976a50f"} Feb 20 14:49:59.055999 master-0 kubenswrapper[7744]: I0220 14:49:59.055730 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa7b31e-a95e-44dc-9c4c-211e8b5718e4" path="/var/lib/kubelet/pods/9fa7b31e-a95e-44dc-9c4c-211e8b5718e4/volumes" Feb 20 14:49:59.059239 master-0 kubenswrapper[7744]: I0220 14:49:59.059164 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" podStartSLOduration=2.059144496 podStartE2EDuration="2.059144496s" podCreationTimestamp="2026-02-20 14:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:49:59.052265086 +0000 UTC m=+198.254465026" watchObservedRunningTime="2026-02-20 14:49:59.059144496 +0000 UTC m=+198.261344416" Feb 20 14:49:59.164406 master-0 
kubenswrapper[7744]: I0220 14:49:59.159030 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:59.164406 master-0 kubenswrapper[7744]: I0220 14:49:59.161703 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:59.229943 master-0 kubenswrapper[7744]: I0220 14:49:59.229814 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 14:49:59.316611 master-0 kubenswrapper[7744]: I0220 14:49:59.316463 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:49:59.316611 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:49:59.316611 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:49:59.316611 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:49:59.316842 master-0 kubenswrapper[7744]: I0220 14:49:59.316606 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:49:59.480007 master-0 kubenswrapper[7744]: I0220 14:49:59.479954 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-bk9bp"] Feb 20 14:49:59.481716 master-0 kubenswrapper[7744]: I0220 14:49:59.481680 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.486145 master-0 kubenswrapper[7744]: I0220 14:49:59.486093 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 20 14:49:59.487895 master-0 kubenswrapper[7744]: I0220 14:49:59.486329 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 20 14:49:59.487895 master-0 kubenswrapper[7744]: I0220 14:49:59.486481 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-7m26s"
Feb 20 14:49:59.492731 master-0 kubenswrapper[7744]: I0220 14:49:59.492460 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"]
Feb 20 14:49:59.494055 master-0 kubenswrapper[7744]: I0220 14:49:59.493794 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.497052 master-0 kubenswrapper[7744]: I0220 14:49:59.497006 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-cpp79"
Feb 20 14:49:59.498783 master-0 kubenswrapper[7744]: I0220 14:49:59.498746 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 20 14:49:59.499028 master-0 kubenswrapper[7744]: I0220 14:49:59.499002 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 20 14:49:59.508794 master-0 kubenswrapper[7744]: I0220 14:49:59.508552 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"]
Feb 20 14:49:59.518405 master-0 kubenswrapper[7744]: I0220 14:49:59.518340 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-stlhz"]
Feb 20 14:49:59.519546 master-0 kubenswrapper[7744]: I0220 14:49:59.519515 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.524354 master-0 kubenswrapper[7744]: I0220 14:49:59.524304 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 20 14:49:59.524354 master-0 kubenswrapper[7744]: I0220 14:49:59.524332 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 20 14:49:59.524766 master-0 kubenswrapper[7744]: I0220 14:49:59.524541 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-jtt44"
Feb 20 14:49:59.524766 master-0 kubenswrapper[7744]: I0220 14:49:59.524599 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 20 14:49:59.550098 master-0 kubenswrapper[7744]: I0220 14:49:59.550042 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-stlhz"]
Feb 20 14:49:59.562512 master-0 kubenswrapper[7744]: I0220 14:49:59.562446 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-tls\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.562512 master-0 kubenswrapper[7744]: I0220 14:49:59.562499 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-sys\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.562733 master-0 kubenswrapper[7744]: I0220 14:49:59.562543 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/99fe3b99-0b40-4887-bcc8-59caa515b99f-metrics-client-ca\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.562733 master-0 kubenswrapper[7744]: I0220 14:49:59.562619 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-root\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.562733 master-0 kubenswrapper[7744]: I0220 14:49:59.562710 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-wtmp\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.562733 master-0 kubenswrapper[7744]: I0220 14:49:59.562731 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-textfile\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.562963 master-0 kubenswrapper[7744]: I0220 14:49:59.562780 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkc7z\" (UniqueName: \"kubernetes.io/projected/99fe3b99-0b40-4887-bcc8-59caa515b99f-kube-api-access-dkc7z\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.562963 master-0 kubenswrapper[7744]: I0220 14:49:59.562797 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.664166 master-0 kubenswrapper[7744]: I0220 14:49:59.664059 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.664166 master-0 kubenswrapper[7744]: I0220 14:49:59.664114 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-tls\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.664166 master-0 kubenswrapper[7744]: I0220 14:49:59.664133 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-sys\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.664391 master-0 kubenswrapper[7744]: I0220 14:49:59.664219 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/99fe3b99-0b40-4887-bcc8-59caa515b99f-metrics-client-ca\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.664391 master-0 kubenswrapper[7744]: E0220 14:49:59.664304 7744 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Feb 20 14:49:59.664391 master-0 kubenswrapper[7744]: I0220 14:49:59.664363 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-sys\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.664391 master-0 kubenswrapper[7744]: I0220 14:49:59.664330 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a39c5481-961c-4ac2-8c5b-a2c0165f4188-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.664500 master-0 kubenswrapper[7744]: E0220 14:49:59.664406 7744 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-tls podName:99fe3b99-0b40-4887-bcc8-59caa515b99f nodeName:}" failed. No retries permitted until 2026-02-20 14:50:00.164379581 +0000 UTC m=+199.366579601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-tls") pod "node-exporter-bk9bp" (UID: "99fe3b99-0b40-4887-bcc8-59caa515b99f") : secret "node-exporter-tls" not found
Feb 20 14:49:59.664500 master-0 kubenswrapper[7744]: I0220 14:49:59.664438 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-root\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.664500 master-0 kubenswrapper[7744]: I0220 14:49:59.664471 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c0b78aa6-7bc8-4221-81f5-bf62a7110380-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.664593 master-0 kubenswrapper[7744]: I0220 14:49:59.664532 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl7tw\" (UniqueName: \"kubernetes.io/projected/a39c5481-961c-4ac2-8c5b-a2c0165f4188-kube-api-access-tl7tw\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.664593 master-0 kubenswrapper[7744]: I0220 14:49:59.664559 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.664663 master-0 kubenswrapper[7744]: I0220 14:49:59.664626 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-wtmp\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.664695 master-0 kubenswrapper[7744]: I0220 14:49:59.664665 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-textfile\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.664695 master-0 kubenswrapper[7744]: I0220 14:49:59.664682 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-root\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.664753 master-0 kubenswrapper[7744]: I0220 14:49:59.664706 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.664788 master-0 kubenswrapper[7744]: I0220 14:49:59.664762 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkc7z\" (UniqueName: \"kubernetes.io/projected/99fe3b99-0b40-4887-bcc8-59caa515b99f-kube-api-access-dkc7z\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.664820 master-0 kubenswrapper[7744]: I0220 14:49:59.664792 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.664851 master-0 kubenswrapper[7744]: I0220 14:49:59.664818 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-wtmp\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.664851 master-0 kubenswrapper[7744]: I0220 14:49:59.664820 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.667068 master-0 kubenswrapper[7744]: I0220 14:49:59.665156 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/99fe3b99-0b40-4887-bcc8-59caa515b99f-metrics-client-ca\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.667068 master-0 kubenswrapper[7744]: I0220 14:49:59.665357 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.667068 master-0 kubenswrapper[7744]: I0220 14:49:59.665411 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhzk6\" (UniqueName: \"kubernetes.io/projected/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-api-access-lhzk6\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.667068 master-0 kubenswrapper[7744]: I0220 14:49:59.665452 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.670943 master-0 kubenswrapper[7744]: I0220 14:49:59.667555 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-textfile\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.674948 master-0 kubenswrapper[7744]: I0220 14:49:59.673512 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.707687 master-0 kubenswrapper[7744]: I0220 14:49:59.707644 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkc7z\" (UniqueName: \"kubernetes.io/projected/99fe3b99-0b40-4887-bcc8-59caa515b99f-kube-api-access-dkc7z\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:49:59.767187 master-0 kubenswrapper[7744]: I0220 14:49:59.767115 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.767392 master-0 kubenswrapper[7744]: I0220 14:49:59.767312 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.767392 master-0 kubenswrapper[7744]: I0220 14:49:59.767353 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.767951 master-0 kubenswrapper[7744]: I0220 14:49:59.767506 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhzk6\" (UniqueName: \"kubernetes.io/projected/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-api-access-lhzk6\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.767951 master-0 kubenswrapper[7744]: I0220 14:49:59.767602 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.767951 master-0 kubenswrapper[7744]: I0220 14:49:59.767642 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.767951 master-0 kubenswrapper[7744]: I0220 14:49:59.767724 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a39c5481-961c-4ac2-8c5b-a2c0165f4188-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.767951 master-0 kubenswrapper[7744]: I0220 14:49:59.767748 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c0b78aa6-7bc8-4221-81f5-bf62a7110380-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.767951 master-0 kubenswrapper[7744]: I0220 14:49:59.767784 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl7tw\" (UniqueName: \"kubernetes.io/projected/a39c5481-961c-4ac2-8c5b-a2c0165f4188-kube-api-access-tl7tw\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.767951 master-0 kubenswrapper[7744]: I0220 14:49:59.767802 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.769118 master-0 kubenswrapper[7744]: I0220 14:49:59.768547 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a39c5481-961c-4ac2-8c5b-a2c0165f4188-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.769118 master-0 kubenswrapper[7744]: I0220 14:49:59.768647 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.769118 master-0 kubenswrapper[7744]: I0220 14:49:59.769064 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.769243 master-0 kubenswrapper[7744]: I0220 14:49:59.769190 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c0b78aa6-7bc8-4221-81f5-bf62a7110380-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.776570 master-0 kubenswrapper[7744]: I0220 14:49:59.776528 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.790064 master-0 kubenswrapper[7744]: I0220 14:49:59.790015 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.790546 master-0 kubenswrapper[7744]: I0220 14:49:59.790507 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.790975 master-0 kubenswrapper[7744]: I0220 14:49:59.790937 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.791265 master-0 kubenswrapper[7744]: I0220 14:49:59.791237 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhzk6\" (UniqueName: \"kubernetes.io/projected/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-api-access-lhzk6\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:49:59.791531 master-0 kubenswrapper[7744]: I0220 14:49:59.791497 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl7tw\" (UniqueName: \"kubernetes.io/projected/a39c5481-961c-4ac2-8c5b-a2c0165f4188-kube-api-access-tl7tw\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.821443 master-0 kubenswrapper[7744]: I0220 14:49:59.821395 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 14:49:59.861983 master-0 kubenswrapper[7744]: I0220 14:49:59.861931 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 14:50:00.106258 master-0 kubenswrapper[7744]: I0220 14:50:00.106077 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n2cdp"
Feb 20 14:50:00.163069 master-0 kubenswrapper[7744]: I0220 14:50:00.153269 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-stlhz"]
Feb 20 14:50:00.173667 master-0 kubenswrapper[7744]: I0220 14:50:00.173630 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-tls\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:50:00.177911 master-0 kubenswrapper[7744]: I0220 14:50:00.177866 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-tls\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:50:00.263688 master-0 kubenswrapper[7744]: I0220 14:50:00.263475 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z4wzg"
Feb 20 14:50:00.263688 master-0 kubenswrapper[7744]: I0220 14:50:00.263532 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z4wzg"
Feb 20 14:50:00.280821 master-0 kubenswrapper[7744]: I0220 14:50:00.280683 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"]
Feb 20 14:50:00.288038 master-0 kubenswrapper[7744]: W0220 14:50:00.287940 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda39c5481_961c_4ac2_8c5b_a2c0165f4188.slice/crio-4e9788fdd4565e3a230622830adb39ca18b14112a272177c052904a2d24b6cd0 WatchSource:0}: Error finding container 4e9788fdd4565e3a230622830adb39ca18b14112a272177c052904a2d24b6cd0: Status 404 returned error can't find the container with id 4e9788fdd4565e3a230622830adb39ca18b14112a272177c052904a2d24b6cd0
Feb 20 14:50:00.315156 master-0 kubenswrapper[7744]: I0220 14:50:00.315091 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt"
Feb 20 14:50:00.317163 master-0 kubenswrapper[7744]: I0220 14:50:00.317103 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:00.317163 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:00.317163 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:00.317163 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:00.317468 master-0 kubenswrapper[7744]: I0220 14:50:00.317162 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:00.406948 master-0 kubenswrapper[7744]: I0220 14:50:00.406886 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 14:50:01.047416 master-0 kubenswrapper[7744]: I0220 14:50:01.047350 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" event={"ID":"c0b78aa6-7bc8-4221-81f5-bf62a7110380","Type":"ContainerStarted","Data":"7cd291b9260d8474da6db1ea27593954a0b8a80d92876d3da551d5f4c38e22a4"}
Feb 20 14:50:01.055971 master-0 kubenswrapper[7744]: I0220 14:50:01.050356 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" event={"ID":"a39c5481-961c-4ac2-8c5b-a2c0165f4188","Type":"ContainerStarted","Data":"ab0f49b2f7d1b009ca5ecc4b169081ae95aaaf6b5ee65b7672e3618ef61d1e7f"}
Feb 20 14:50:01.055971 master-0 kubenswrapper[7744]: I0220 14:50:01.050407 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" event={"ID":"a39c5481-961c-4ac2-8c5b-a2c0165f4188","Type":"ContainerStarted","Data":"49c14eb0ee80e8816f81743c980057c7a3f0930b6f7facd4c8a07a25d04b2a16"}
Feb 20 14:50:01.055971 master-0 kubenswrapper[7744]: I0220 14:50:01.050425 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" event={"ID":"a39c5481-961c-4ac2-8c5b-a2c0165f4188","Type":"ContainerStarted","Data":"4e9788fdd4565e3a230622830adb39ca18b14112a272177c052904a2d24b6cd0"}
Feb 20 14:50:01.055971 master-0 kubenswrapper[7744]: I0220 14:50:01.052135 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-bk9bp" event={"ID":"99fe3b99-0b40-4887-bcc8-59caa515b99f","Type":"ContainerStarted","Data":"2d789ae2430f40a62d0c76334dce72b1228320484eb36b8f7f3663eb8534eb42"}
Feb 20 14:50:01.317544 master-0 kubenswrapper[7744]: I0220 14:50:01.317341 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:01.317544 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:01.317544 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:01.317544 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:01.317544 master-0 kubenswrapper[7744]: I0220 14:50:01.317447 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:01.320409 master-0 kubenswrapper[7744]: I0220 14:50:01.320342 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z4wzg" podUID="93786626-fac4-48f0-bf72-992bc39f4a82" containerName="registry-server" probeResult="failure" output=<
Feb 20 14:50:01.320409 master-0 kubenswrapper[7744]: timeout: failed to connect service ":50051" within 1s
Feb 20 14:50:01.320409 master-0 kubenswrapper[7744]: >
Feb 20 14:50:02.317310 master-0 kubenswrapper[7744]: I0220 14:50:02.317258 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:02.317310 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:02.317310 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:02.317310 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:02.317608 master-0 kubenswrapper[7744]: I0220 14:50:02.317331 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:03.079192 master-0 kubenswrapper[7744]: I0220 14:50:03.079013 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" event={"ID":"c0b78aa6-7bc8-4221-81f5-bf62a7110380","Type":"ContainerStarted","Data":"64603a3ea96fd717e2035494db040660bbfe5a6894d3b87f6bdcaa17f69d7f5c"}
Feb 20 14:50:03.079192 master-0 kubenswrapper[7744]: I0220 14:50:03.079091 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" event={"ID":"c0b78aa6-7bc8-4221-81f5-bf62a7110380","Type":"ContainerStarted","Data":"b4e6e35a13489e6753b258d26ed5a83ff62c7c8c4f879f0771edf0596a055016"}
Feb 20 14:50:03.079192 master-0 kubenswrapper[7744]: I0220 14:50:03.079118 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" event={"ID":"c0b78aa6-7bc8-4221-81f5-bf62a7110380","Type":"ContainerStarted","Data":"fdffe43b1b08d49ea8314e914701c72141f7d81a9b8fe2dd80fb3d7e5d551135"}
Feb 20 14:50:03.089016 master-0 kubenswrapper[7744]: I0220 14:50:03.088887 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" event={"ID":"a39c5481-961c-4ac2-8c5b-a2c0165f4188","Type":"ContainerStarted","Data":"17e8999646d64007d1bd58d640bc79694199b53000fbeaec2e4a35a48342c1a5"}
Feb 20 14:50:03.098160 master-0 kubenswrapper[7744]: I0220 14:50:03.098065 7744 generic.go:334] "Generic (PLEG): container finished" podID="99fe3b99-0b40-4887-bcc8-59caa515b99f" containerID="ac76df8cb547ae36da1275aa8fb2cdc86502281cca8b0c482befd5640340a0ca" exitCode=0
Feb 20 14:50:03.098355 master-0 kubenswrapper[7744]: I0220 14:50:03.098159 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-bk9bp" event={"ID":"99fe3b99-0b40-4887-bcc8-59caa515b99f","Type":"ContainerDied","Data":"ac76df8cb547ae36da1275aa8fb2cdc86502281cca8b0c482befd5640340a0ca"}
Feb 20 14:50:03.316886 master-0 kubenswrapper[7744]: I0220 14:50:03.316644 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:03.316886 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:03.316886 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:03.316886 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:03.317201 master-0 kubenswrapper[7744]: I0220 14:50:03.316917 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:03.671282 master-0 kubenswrapper[7744]: I0220 14:50:03.669466 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" podStartSLOduration=2.669697935 podStartE2EDuration="4.668779296s" podCreationTimestamp="2026-02-20 14:49:59 +0000 UTC" firstStartedPulling="2026-02-20 14:50:00.168325471 +0000 UTC m=+199.370525391" lastFinishedPulling="2026-02-20 14:50:02.167406822 +0000 UTC m=+201.369606752" observedRunningTime="2026-02-20 14:50:03.38263994 +0000 UTC m=+202.584839900" watchObservedRunningTime="2026-02-20 14:50:03.668779296 +0000 UTC m=+202.870979236"
Feb 20 14:50:04.086445 master-0 kubenswrapper[7744]: I0220 14:50:04.084521 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" podStartSLOduration=3.503136999 podStartE2EDuration="5.084496154s" podCreationTimestamp="2026-02-20 14:49:59 +0000 UTC" firstStartedPulling="2026-02-20 14:50:00.591709627 +0000 UTC m=+199.793909547" lastFinishedPulling="2026-02-20 14:50:02.173068742 +0000 UTC m=+201.375268702" observedRunningTime="2026-02-20 14:50:04.082564786 +0000 UTC m=+203.284764766" watchObservedRunningTime="2026-02-20 14:50:04.084496154 +0000 UTC m=+203.286696114"
Feb 20 14:50:04.108645 master-0 kubenswrapper[7744]: I0220 14:50:04.108026 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-bk9bp" event={"ID":"99fe3b99-0b40-4887-bcc8-59caa515b99f","Type":"ContainerStarted","Data":"249c8f01deec61596704fed74ef02874dd0095cff46bbbc1facf51120bbc8333"}
Feb 20 14:50:04.108645 master-0 kubenswrapper[7744]: I0220 14:50:04.108113 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-bk9bp" event={"ID":"99fe3b99-0b40-4887-bcc8-59caa515b99f","Type":"ContainerStarted","Data":"62df2fe99b665d08438eea218ad5ac1857eaea573fdf0c22507e4202d78adb51"}
Feb 20 14:50:04.316877 master-0 kubenswrapper[7744]: I0220 14:50:04.316641 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:04.316877 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:04.316877 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:04.316877 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:04.316877 master-0 kubenswrapper[7744]: I0220 14:50:04.316721 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:04.427055 master-0 kubenswrapper[7744]: I0220 14:50:04.426903
7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-bk9bp" podStartSLOduration=3.706876536 podStartE2EDuration="5.426872225s" podCreationTimestamp="2026-02-20 14:49:59 +0000 UTC" firstStartedPulling="2026-02-20 14:50:00.451611987 +0000 UTC m=+199.653811937" lastFinishedPulling="2026-02-20 14:50:02.171607706 +0000 UTC m=+201.373807626" observedRunningTime="2026-02-20 14:50:04.423149794 +0000 UTC m=+203.625349774" watchObservedRunningTime="2026-02-20 14:50:04.426872225 +0000 UTC m=+203.629072185" Feb 20 14:50:05.316089 master-0 kubenswrapper[7744]: I0220 14:50:05.316021 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:05.316089 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:05.316089 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:05.316089 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:05.316089 master-0 kubenswrapper[7744]: I0220 14:50:05.316095 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:50:05.454094 master-0 kubenswrapper[7744]: I0220 14:50:05.454002 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-9bcdd7684-kz2z2"] Feb 20 14:50:05.455040 master-0 kubenswrapper[7744]: I0220 14:50:05.455000 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.457515 master-0 kubenswrapper[7744]: I0220 14:50:05.457452 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-dm2ds" Feb 20 14:50:05.457662 master-0 kubenswrapper[7744]: I0220 14:50:05.457629 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7pkl9jqft06ca" Feb 20 14:50:05.458102 master-0 kubenswrapper[7744]: I0220 14:50:05.458066 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 20 14:50:05.459081 master-0 kubenswrapper[7744]: I0220 14:50:05.459047 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 20 14:50:05.459281 master-0 kubenswrapper[7744]: I0220 14:50:05.459080 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 20 14:50:05.461242 master-0 kubenswrapper[7744]: I0220 14:50:05.461216 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 20 14:50:05.517119 master-0 kubenswrapper[7744]: I0220 14:50:05.517063 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-9bcdd7684-kz2z2"] Feb 20 14:50:05.555453 master-0 kubenswrapper[7744]: I0220 14:50:05.555388 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-audit-log\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.555817 master-0 kubenswrapper[7744]: I0220 14:50:05.555797 7744 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zppr\" (UniqueName: \"kubernetes.io/projected/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-kube-api-access-9zppr\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.556037 master-0 kubenswrapper[7744]: I0220 14:50:05.556004 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-client-certs\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.556197 master-0 kubenswrapper[7744]: I0220 14:50:05.556180 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-server-tls\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.556344 master-0 kubenswrapper[7744]: I0220 14:50:05.556325 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.556623 master-0 kubenswrapper[7744]: I0220 14:50:05.556562 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.556714 master-0 kubenswrapper[7744]: I0220 14:50:05.556672 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-metrics-server-audit-profiles\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.658195 master-0 kubenswrapper[7744]: I0220 14:50:05.658042 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zppr\" (UniqueName: \"kubernetes.io/projected/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-kube-api-access-9zppr\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.658195 master-0 kubenswrapper[7744]: I0220 14:50:05.658142 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-client-certs\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.658195 master-0 kubenswrapper[7744]: I0220 14:50:05.658200 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-server-tls\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 
20 14:50:05.658562 master-0 kubenswrapper[7744]: I0220 14:50:05.658226 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.658562 master-0 kubenswrapper[7744]: I0220 14:50:05.658258 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.658562 master-0 kubenswrapper[7744]: I0220 14:50:05.658287 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-metrics-server-audit-profiles\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.658562 master-0 kubenswrapper[7744]: I0220 14:50:05.658337 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-audit-log\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.659795 master-0 kubenswrapper[7744]: I0220 14:50:05.659740 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-audit-log\") pod 
\"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.660193 master-0 kubenswrapper[7744]: I0220 14:50:05.660148 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.660894 master-0 kubenswrapper[7744]: I0220 14:50:05.660841 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-metrics-server-audit-profiles\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.663880 master-0 kubenswrapper[7744]: I0220 14:50:05.663814 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-client-certs\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.664787 master-0 kubenswrapper[7744]: I0220 14:50:05.664749 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.665044 master-0 kubenswrapper[7744]: I0220 14:50:05.664799 7744 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-server-tls\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.685598 master-0 kubenswrapper[7744]: I0220 14:50:05.685553 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zppr\" (UniqueName: \"kubernetes.io/projected/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-kube-api-access-9zppr\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:05.791486 master-0 kubenswrapper[7744]: I0220 14:50:05.791415 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 14:50:06.290106 master-0 kubenswrapper[7744]: I0220 14:50:06.290040 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-9bcdd7684-kz2z2"] Feb 20 14:50:06.317752 master-0 kubenswrapper[7744]: I0220 14:50:06.317621 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:06.317752 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:06.317752 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:06.317752 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:06.318699 master-0 kubenswrapper[7744]: I0220 14:50:06.317794 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 20 14:50:07.135811 master-0 kubenswrapper[7744]: I0220 14:50:07.135726 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" event={"ID":"bdd203e0-3dd9-4e9d-81f1-46f60d235e38","Type":"ContainerStarted","Data":"3209ad8e141d4f4023abb0b8711dc267473b98fd78163c32b9a46c610babe186"} Feb 20 14:50:07.316617 master-0 kubenswrapper[7744]: I0220 14:50:07.316514 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:07.316617 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:07.316617 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:07.316617 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:07.316617 master-0 kubenswrapper[7744]: I0220 14:50:07.316605 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:50:08.320178 master-0 kubenswrapper[7744]: I0220 14:50:08.317792 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:08.320178 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:08.320178 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:08.320178 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:08.320178 master-0 kubenswrapper[7744]: I0220 14:50:08.317866 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" 
podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:50:09.153061 master-0 kubenswrapper[7744]: I0220 14:50:09.152985 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" event={"ID":"bdd203e0-3dd9-4e9d-81f1-46f60d235e38","Type":"ContainerStarted","Data":"6de3357e6e18954512d073202b91b501ca58384ea08b18ec75d08c4929c63531"} Feb 20 14:50:09.187651 master-0 kubenswrapper[7744]: I0220 14:50:09.187546 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" podStartSLOduration=2.02015657 podStartE2EDuration="4.187517715s" podCreationTimestamp="2026-02-20 14:50:05 +0000 UTC" firstStartedPulling="2026-02-20 14:50:06.302843675 +0000 UTC m=+205.505043645" lastFinishedPulling="2026-02-20 14:50:08.47020487 +0000 UTC m=+207.672404790" observedRunningTime="2026-02-20 14:50:09.181590619 +0000 UTC m=+208.383790559" watchObservedRunningTime="2026-02-20 14:50:09.187517715 +0000 UTC m=+208.389717665" Feb 20 14:50:09.316060 master-0 kubenswrapper[7744]: I0220 14:50:09.315989 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:09.316060 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:09.316060 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:09.316060 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:09.316473 master-0 kubenswrapper[7744]: I0220 14:50:09.316075 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 20 14:50:10.317288 master-0 kubenswrapper[7744]: I0220 14:50:10.317159 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:10.317288 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:10.317288 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:10.317288 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:10.317288 master-0 kubenswrapper[7744]: I0220 14:50:10.317244 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:50:10.334327 master-0 kubenswrapper[7744]: I0220 14:50:10.334263 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:50:10.380821 master-0 kubenswrapper[7744]: I0220 14:50:10.380727 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 14:50:11.316501 master-0 kubenswrapper[7744]: I0220 14:50:11.316383 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:11.316501 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:11.316501 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:11.316501 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:11.316995 master-0 kubenswrapper[7744]: I0220 14:50:11.316552 7744 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:50:12.317485 master-0 kubenswrapper[7744]: I0220 14:50:12.317421 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:12.317485 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:12.317485 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:12.317485 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:12.318529 master-0 kubenswrapper[7744]: I0220 14:50:12.318060 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:50:13.317459 master-0 kubenswrapper[7744]: I0220 14:50:13.317296 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:13.317459 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:13.317459 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:13.317459 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:13.317459 master-0 kubenswrapper[7744]: I0220 14:50:13.317459 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
20 14:50:14.315995 master-0 kubenswrapper[7744]: I0220 14:50:14.315899 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:14.315995 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:14.315995 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:14.315995 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:14.316830 master-0 kubenswrapper[7744]: I0220 14:50:14.316021 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:50:15.320414 master-0 kubenswrapper[7744]: I0220 14:50:15.320332 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:15.320414 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:15.320414 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:15.320414 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:15.321147 master-0 kubenswrapper[7744]: I0220 14:50:15.320438 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:50:16.316478 master-0 kubenswrapper[7744]: I0220 14:50:16.316402 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:16.316478 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:16.316478 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:16.316478 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:16.316961 master-0 kubenswrapper[7744]: I0220 14:50:16.316501 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:50:17.316857 master-0 kubenswrapper[7744]: I0220 14:50:17.316798 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:17.316857 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:17.316857 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:50:17.316857 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:50:17.318016 master-0 kubenswrapper[7744]: I0220 14:50:17.317971 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:50:18.317295 master-0 kubenswrapper[7744]: I0220 14:50:18.317170 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:50:18.317295 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:50:18.317295 
master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:18.317295 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:18.318452 master-0 kubenswrapper[7744]: I0220 14:50:18.317328 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:19.317025 master-0 kubenswrapper[7744]: I0220 14:50:19.316885 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:19.317025 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:19.317025 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:19.317025 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:19.317706 master-0 kubenswrapper[7744]: I0220 14:50:19.317060 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:20.317391 master-0 kubenswrapper[7744]: I0220 14:50:20.317292 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:20.317391 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:20.317391 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:20.317391 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:20.317391 master-0 kubenswrapper[7744]: I0220 14:50:20.317383 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:21.319503 master-0 kubenswrapper[7744]: I0220 14:50:21.319429 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:21.319503 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:21.319503 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:21.319503 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:21.320150 master-0 kubenswrapper[7744]: I0220 14:50:21.319516 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:22.316687 master-0 kubenswrapper[7744]: I0220 14:50:22.316593 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:22.316687 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:22.316687 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:22.316687 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:22.316687 master-0 kubenswrapper[7744]: I0220 14:50:22.316682 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:23.317326 master-0 kubenswrapper[7744]: I0220 14:50:23.317234 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:23.317326 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:23.317326 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:23.317326 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:23.318336 master-0 kubenswrapper[7744]: I0220 14:50:23.317379 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:24.316981 master-0 kubenswrapper[7744]: I0220 14:50:24.316838 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:24.316981 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:24.316981 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:24.316981 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:24.316981 master-0 kubenswrapper[7744]: I0220 14:50:24.316917 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:25.317519 master-0 kubenswrapper[7744]: I0220 14:50:25.317410 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:25.317519 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:25.317519 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:25.317519 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:25.317519 master-0 kubenswrapper[7744]: I0220 14:50:25.317516 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:25.792639 master-0 kubenswrapper[7744]: I0220 14:50:25.792486 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 14:50:25.792639 master-0 kubenswrapper[7744]: I0220 14:50:25.792584 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 14:50:26.317139 master-0 kubenswrapper[7744]: I0220 14:50:26.317012 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:26.317139 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:26.317139 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:26.317139 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:26.323791 master-0 kubenswrapper[7744]: I0220 14:50:26.317132 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:27.317583 master-0 kubenswrapper[7744]: I0220 14:50:27.317478 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:27.317583 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:27.317583 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:27.317583 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:27.318666 master-0 kubenswrapper[7744]: I0220 14:50:27.317578 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:28.316919 master-0 kubenswrapper[7744]: I0220 14:50:28.316815 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:28.316919 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:28.316919 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:28.316919 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:28.317370 master-0 kubenswrapper[7744]: I0220 14:50:28.316955 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:29.316492 master-0 kubenswrapper[7744]: I0220 14:50:29.316420 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:29.316492 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:29.316492 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:29.316492 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:29.317567 master-0 kubenswrapper[7744]: I0220 14:50:29.316509 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:30.317086 master-0 kubenswrapper[7744]: I0220 14:50:30.317019 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:30.317086 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:30.317086 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:30.317086 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:30.318239 master-0 kubenswrapper[7744]: I0220 14:50:30.318119 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:31.315607 master-0 kubenswrapper[7744]: I0220 14:50:31.315517 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:31.315607 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:31.315607 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:31.315607 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:31.316157 master-0 kubenswrapper[7744]: I0220 14:50:31.315604 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:32.317339 master-0 kubenswrapper[7744]: I0220 14:50:32.317231 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:32.317339 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:32.317339 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:32.317339 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:32.318317 master-0 kubenswrapper[7744]: I0220 14:50:32.317349 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:33.316790 master-0 kubenswrapper[7744]: I0220 14:50:33.316699 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:33.316790 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:33.316790 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:33.316790 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:33.317264 master-0 kubenswrapper[7744]: I0220 14:50:33.316786 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:34.316866 master-0 kubenswrapper[7744]: I0220 14:50:34.316773 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:34.316866 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:34.316866 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:34.316866 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:34.317907 master-0 kubenswrapper[7744]: I0220 14:50:34.316870 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:35.317492 master-0 kubenswrapper[7744]: I0220 14:50:35.317415 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:35.317492 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:35.317492 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:35.317492 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:35.318038 master-0 kubenswrapper[7744]: I0220 14:50:35.317511 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:36.317366 master-0 kubenswrapper[7744]: I0220 14:50:36.317270 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:36.317366 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:36.317366 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:36.317366 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:36.318316 master-0 kubenswrapper[7744]: I0220 14:50:36.317362 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:37.317407 master-0 kubenswrapper[7744]: I0220 14:50:37.317267 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:37.317407 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:37.317407 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:37.317407 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:37.317407 master-0 kubenswrapper[7744]: I0220 14:50:37.317372 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:38.318298 master-0 kubenswrapper[7744]: I0220 14:50:38.318243 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:38.318298 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:38.318298 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:38.318298 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:38.318839 master-0 kubenswrapper[7744]: I0220 14:50:38.318327 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:39.317561 master-0 kubenswrapper[7744]: I0220 14:50:39.317452 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:39.317561 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:39.317561 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:39.317561 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:39.319204 master-0 kubenswrapper[7744]: I0220 14:50:39.317575 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:40.317045 master-0 kubenswrapper[7744]: I0220 14:50:40.316454 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:40.317045 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:40.317045 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:40.317045 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:40.317045 master-0 kubenswrapper[7744]: I0220 14:50:40.316540 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:41.316757 master-0 kubenswrapper[7744]: I0220 14:50:41.316678 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:41.316757 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:41.316757 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:41.316757 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:41.317869 master-0 kubenswrapper[7744]: I0220 14:50:41.316785 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:42.317134 master-0 kubenswrapper[7744]: I0220 14:50:42.317056 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:42.317134 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:42.317134 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:42.317134 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:42.318178 master-0 kubenswrapper[7744]: I0220 14:50:42.317351 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:43.317383 master-0 kubenswrapper[7744]: I0220 14:50:43.317316 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:43.317383 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:43.317383 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:43.317383 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:43.318180 master-0 kubenswrapper[7744]: I0220 14:50:43.317427 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:44.316953 master-0 kubenswrapper[7744]: I0220 14:50:44.316812 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:44.316953 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:44.316953 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:44.316953 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:44.317361 master-0 kubenswrapper[7744]: I0220 14:50:44.316918 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:45.316886 master-0 kubenswrapper[7744]: I0220 14:50:45.316810 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:45.316886 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:45.316886 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:45.316886 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:45.317627 master-0 kubenswrapper[7744]: I0220 14:50:45.316954 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:45.801014 master-0 kubenswrapper[7744]: I0220 14:50:45.800900 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 14:50:45.808581 master-0 kubenswrapper[7744]: I0220 14:50:45.808523 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 14:50:46.318302 master-0 kubenswrapper[7744]: I0220 14:50:46.318218 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:46.318302 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:46.318302 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:46.318302 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:46.319436 master-0 kubenswrapper[7744]: I0220 14:50:46.318337 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:47.317639 master-0 kubenswrapper[7744]: I0220 14:50:47.317520 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:47.317639 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:47.317639 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:47.317639 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:47.317639 master-0 kubenswrapper[7744]: I0220 14:50:47.317634 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:48.316191 master-0 kubenswrapper[7744]: I0220 14:50:48.316128 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:48.316191 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:48.316191 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:48.316191 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:48.316191 master-0 kubenswrapper[7744]: I0220 14:50:48.316199 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:49.318014 master-0 kubenswrapper[7744]: I0220 14:50:49.317665 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:49.318014 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:49.318014 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:49.318014 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:49.318014 master-0 kubenswrapper[7744]: I0220 14:50:49.317766 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:50.317424 master-0 kubenswrapper[7744]: I0220 14:50:50.317294 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:50.317424 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:50.317424 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:50.317424 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:50.318882 master-0 kubenswrapper[7744]: I0220 14:50:50.317835 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:51.319253 master-0 kubenswrapper[7744]: I0220 14:50:51.319161 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:51.319253 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:51.319253 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:51.319253 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:51.320267 master-0 kubenswrapper[7744]: I0220 14:50:51.319267 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:52.317121 master-0 kubenswrapper[7744]: I0220 14:50:52.317031 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:52.317121 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:52.317121 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:52.317121 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:52.317647 master-0 kubenswrapper[7744]: I0220 14:50:52.317148 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:53.316572 master-0 kubenswrapper[7744]: I0220 14:50:53.316452 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:53.316572 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:53.316572 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:53.316572 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:53.318069 master-0 kubenswrapper[7744]: I0220 14:50:53.316583 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:54.317665 master-0 kubenswrapper[7744]: I0220 14:50:54.317557 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:54.317665 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:54.317665 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:54.317665 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:54.318648 master-0 kubenswrapper[7744]: I0220 14:50:54.317699 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:55.317171 master-0 kubenswrapper[7744]: I0220 14:50:55.317095 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:55.317171 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:55.317171 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:55.317171 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:55.317681 master-0 kubenswrapper[7744]: I0220 14:50:55.317201 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:56.317255 master-0 kubenswrapper[7744]: I0220 14:50:56.317181 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:56.317255 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:56.317255 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:56.317255 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:56.318251 master-0 kubenswrapper[7744]: I0220 14:50:56.318052 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:57.317036 master-0 kubenswrapper[7744]: I0220 14:50:57.316883 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:57.317036 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:57.317036 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:57.317036 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:57.317036 master-0 kubenswrapper[7744]: I0220 14:50:57.317013 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:58.317743 master-0 kubenswrapper[7744]: I0220 14:50:58.317624 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:58.317743 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:58.317743 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:58.317743 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:58.318612 master-0 kubenswrapper[7744]: I0220 14:50:58.317800 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:50:59.317321 master-0 kubenswrapper[7744]: I0220 14:50:59.317209 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:50:59.317321 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:50:59.317321 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:50:59.317321 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:50:59.317321 master-0 kubenswrapper[7744]: I0220 14:50:59.317310 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:51:00.317023 master-0 kubenswrapper[7744]: I0220 14:51:00.316229 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:51:00.317023 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:51:00.317023 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:51:00.317023 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:51:00.317023 master-0 kubenswrapper[7744]: I0220 14:51:00.316317 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:51:01.317075 master-0 kubenswrapper[7744]: I0220 14:51:01.316920 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:51:01.317075 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:51:01.317075 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:51:01.317075 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:51:01.318009 master-0 kubenswrapper[7744]: I0220 14:51:01.317121 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:51:02.316880 master-0 kubenswrapper[7744]: I0220 14:51:02.316812 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:02.316880 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:02.316880 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:02.316880 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:02.318015 master-0 kubenswrapper[7744]: I0220 14:51:02.316887 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:03.316611 master-0 kubenswrapper[7744]: I0220 14:51:03.316524 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:03.316611 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:03.316611 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:03.316611 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:03.317089 master-0 kubenswrapper[7744]: I0220 14:51:03.316638 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:04.316254 master-0 kubenswrapper[7744]: I0220 14:51:04.316088 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:04.316254 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:04.316254 master-0 
kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:04.316254 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:04.317313 master-0 kubenswrapper[7744]: I0220 14:51:04.316274 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:05.317075 master-0 kubenswrapper[7744]: I0220 14:51:05.316989 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:05.317075 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:05.317075 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:05.317075 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:05.318300 master-0 kubenswrapper[7744]: I0220 14:51:05.317087 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:06.316831 master-0 kubenswrapper[7744]: I0220 14:51:06.316753 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:06.316831 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:06.316831 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:06.316831 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:06.317860 master-0 kubenswrapper[7744]: I0220 14:51:06.316849 7744 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:07.317823 master-0 kubenswrapper[7744]: I0220 14:51:07.317732 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:07.317823 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:07.317823 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:07.317823 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:07.318986 master-0 kubenswrapper[7744]: I0220 14:51:07.317830 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:08.317458 master-0 kubenswrapper[7744]: I0220 14:51:08.317361 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:08.317458 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:08.317458 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:08.317458 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:08.317908 master-0 kubenswrapper[7744]: I0220 14:51:08.317471 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 20 14:51:09.317539 master-0 kubenswrapper[7744]: I0220 14:51:09.317391 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:09.317539 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:09.317539 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:09.317539 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:09.318562 master-0 kubenswrapper[7744]: I0220 14:51:09.317554 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:10.317988 master-0 kubenswrapper[7744]: I0220 14:51:10.317908 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:10.317988 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:10.317988 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:10.317988 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:10.317988 master-0 kubenswrapper[7744]: I0220 14:51:10.317993 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:11.317847 master-0 kubenswrapper[7744]: I0220 14:51:11.317742 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:11.317847 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:11.317847 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:11.317847 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:11.318990 master-0 kubenswrapper[7744]: I0220 14:51:11.317915 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:12.317357 master-0 kubenswrapper[7744]: I0220 14:51:12.317262 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:12.317357 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:12.317357 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:12.317357 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:12.317759 master-0 kubenswrapper[7744]: I0220 14:51:12.317354 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:13.317407 master-0 kubenswrapper[7744]: I0220 14:51:13.317298 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:13.317407 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 
14:51:13.317407 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:13.317407 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:13.318537 master-0 kubenswrapper[7744]: I0220 14:51:13.317434 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:14.318347 master-0 kubenswrapper[7744]: I0220 14:51:14.318278 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:14.318347 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:14.318347 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:14.318347 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:14.319505 master-0 kubenswrapper[7744]: I0220 14:51:14.319454 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:15.317376 master-0 kubenswrapper[7744]: I0220 14:51:15.317267 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:15.317376 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:15.317376 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:15.317376 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:15.317915 master-0 kubenswrapper[7744]: I0220 14:51:15.317377 
7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:16.316521 master-0 kubenswrapper[7744]: I0220 14:51:16.316411 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:16.316521 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:16.316521 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:16.316521 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:16.317565 master-0 kubenswrapper[7744]: I0220 14:51:16.316536 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:17.317884 master-0 kubenswrapper[7744]: I0220 14:51:17.317727 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:17.317884 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:17.317884 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:17.317884 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:17.319053 master-0 kubenswrapper[7744]: I0220 14:51:17.317892 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 20 14:51:18.316661 master-0 kubenswrapper[7744]: I0220 14:51:18.316541 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:18.316661 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:18.316661 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:18.316661 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:18.317233 master-0 kubenswrapper[7744]: I0220 14:51:18.316696 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:19.318084 master-0 kubenswrapper[7744]: I0220 14:51:19.317900 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:19.318084 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:19.318084 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:19.318084 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:19.318084 master-0 kubenswrapper[7744]: I0220 14:51:19.318044 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:20.318103 master-0 kubenswrapper[7744]: I0220 14:51:20.317613 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:20.318103 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:20.318103 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:20.318103 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:20.318103 master-0 kubenswrapper[7744]: I0220 14:51:20.317679 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:21.316435 master-0 kubenswrapper[7744]: I0220 14:51:21.316365 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:21.316435 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:21.316435 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:21.316435 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:21.316856 master-0 kubenswrapper[7744]: I0220 14:51:21.316463 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:22.316970 master-0 kubenswrapper[7744]: I0220 14:51:22.316805 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:22.316970 master-0 kubenswrapper[7744]: 
[-]has-synced failed: reason withheld Feb 20 14:51:22.316970 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:22.316970 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:22.316970 master-0 kubenswrapper[7744]: I0220 14:51:22.316912 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:23.317353 master-0 kubenswrapper[7744]: I0220 14:51:23.317239 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:23.317353 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:23.317353 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:23.317353 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:23.318510 master-0 kubenswrapper[7744]: I0220 14:51:23.317362 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:24.317109 master-0 kubenswrapper[7744]: I0220 14:51:24.317003 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:24.317109 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:24.317109 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:24.317109 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:24.317109 master-0 
kubenswrapper[7744]: I0220 14:51:24.317086 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:25.316868 master-0 kubenswrapper[7744]: I0220 14:51:25.316779 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:25.316868 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:25.316868 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:25.316868 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:25.317272 master-0 kubenswrapper[7744]: I0220 14:51:25.316878 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:26.317558 master-0 kubenswrapper[7744]: I0220 14:51:26.317417 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:26.317558 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:26.317558 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:26.317558 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:26.317558 master-0 kubenswrapper[7744]: I0220 14:51:26.317514 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:27.316818 master-0 kubenswrapper[7744]: I0220 14:51:27.316705 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:27.316818 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:27.316818 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:27.316818 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:27.317452 master-0 kubenswrapper[7744]: I0220 14:51:27.316811 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:28.317553 master-0 kubenswrapper[7744]: I0220 14:51:28.317439 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:28.317553 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:28.317553 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:28.317553 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:28.318580 master-0 kubenswrapper[7744]: I0220 14:51:28.317547 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:29.317137 master-0 kubenswrapper[7744]: I0220 14:51:29.317029 7744 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:29.317137 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:29.317137 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:29.317137 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:29.318482 master-0 kubenswrapper[7744]: I0220 14:51:29.317147 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:30.317402 master-0 kubenswrapper[7744]: I0220 14:51:30.317292 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:30.317402 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:30.317402 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:30.317402 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:30.318455 master-0 kubenswrapper[7744]: I0220 14:51:30.317407 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:31.317220 master-0 kubenswrapper[7744]: I0220 14:51:31.316827 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 
14:51:31.317220 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:31.317220 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:31.317220 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:31.317646 master-0 kubenswrapper[7744]: I0220 14:51:31.317250 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:32.317025 master-0 kubenswrapper[7744]: I0220 14:51:32.316897 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:32.317025 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:32.317025 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:32.317025 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:32.317911 master-0 kubenswrapper[7744]: I0220 14:51:32.317119 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:33.316490 master-0 kubenswrapper[7744]: I0220 14:51:33.316431 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:33.316490 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:33.316490 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:33.316490 master-0 kubenswrapper[7744]: healthz 
check failed Feb 20 14:51:33.316802 master-0 kubenswrapper[7744]: I0220 14:51:33.316501 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:34.315787 master-0 kubenswrapper[7744]: I0220 14:51:34.315734 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:34.315787 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:34.315787 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:34.315787 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:34.316325 master-0 kubenswrapper[7744]: I0220 14:51:34.315795 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:35.317690 master-0 kubenswrapper[7744]: I0220 14:51:35.317576 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:35.317690 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:35.317690 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:35.317690 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:35.318911 master-0 kubenswrapper[7744]: I0220 14:51:35.317687 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" 
podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:36.317499 master-0 kubenswrapper[7744]: I0220 14:51:36.317430 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:36.317499 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:36.317499 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:36.317499 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:36.318483 master-0 kubenswrapper[7744]: I0220 14:51:36.317517 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:37.316675 master-0 kubenswrapper[7744]: I0220 14:51:37.316584 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:37.316675 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:37.316675 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:37.316675 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:37.317182 master-0 kubenswrapper[7744]: I0220 14:51:37.316678 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:38.316842 master-0 kubenswrapper[7744]: I0220 14:51:38.316749 7744 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:38.316842 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:38.316842 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:38.316842 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:38.316842 master-0 kubenswrapper[7744]: I0220 14:51:38.316841 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:38.899797 master-0 kubenswrapper[7744]: I0220 14:51:38.899652 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-fjtrw_4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/ingress-operator/0.log" Feb 20 14:51:38.899797 master-0 kubenswrapper[7744]: I0220 14:51:38.899761 7744 generic.go:334] "Generic (PLEG): container finished" podID="4b6a656c-40d6-4c63-9c6f-ac943eae4c9a" containerID="d3902c23a65d809f06a7ebdcc4af6b01c4d6059cec90ec0825ac32ffd942466d" exitCode=1 Feb 20 14:51:38.899797 master-0 kubenswrapper[7744]: I0220 14:51:38.899814 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerDied","Data":"d3902c23a65d809f06a7ebdcc4af6b01c4d6059cec90ec0825ac32ffd942466d"} Feb 20 14:51:38.900770 master-0 kubenswrapper[7744]: I0220 14:51:38.900726 7744 scope.go:117] "RemoveContainer" containerID="d3902c23a65d809f06a7ebdcc4af6b01c4d6059cec90ec0825ac32ffd942466d" Feb 20 14:51:39.316552 master-0 kubenswrapper[7744]: I0220 14:51:39.316435 7744 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:39.316552 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:39.316552 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:39.316552 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:39.316552 master-0 kubenswrapper[7744]: I0220 14:51:39.316520 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:39.908265 master-0 kubenswrapper[7744]: I0220 14:51:39.908190 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-fjtrw_4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/ingress-operator/0.log" Feb 20 14:51:39.908580 master-0 kubenswrapper[7744]: I0220 14:51:39.908275 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerStarted","Data":"714dbfc4fc378943ded0e9dbde4bd4d13b8c24e9f4a8b6486b4468145db42e05"} Feb 20 14:51:40.317436 master-0 kubenswrapper[7744]: I0220 14:51:40.317372 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:40.317436 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:40.317436 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:40.317436 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:40.318472 master-0 
kubenswrapper[7744]: I0220 14:51:40.317456 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:41.139523 master-0 kubenswrapper[7744]: I0220 14:51:41.139429 7744 scope.go:117] "RemoveContainer" containerID="7d3284edf21995a27c89886a69cea12c9862e571d30d2baf2f5e1bce4a1984d8" Feb 20 14:51:41.317025 master-0 kubenswrapper[7744]: I0220 14:51:41.316825 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:41.317025 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:41.317025 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:41.317025 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:41.317025 master-0 kubenswrapper[7744]: I0220 14:51:41.316960 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:42.317616 master-0 kubenswrapper[7744]: I0220 14:51:42.317388 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:42.317616 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:42.317616 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:42.317616 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:42.317616 master-0 kubenswrapper[7744]: I0220 
14:51:42.317475 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:43.316725 master-0 kubenswrapper[7744]: I0220 14:51:43.316633 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:43.316725 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:43.316725 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:43.316725 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:43.317205 master-0 kubenswrapper[7744]: I0220 14:51:43.316733 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:44.317257 master-0 kubenswrapper[7744]: I0220 14:51:44.317145 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:44.317257 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:44.317257 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:44.317257 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:44.317257 master-0 kubenswrapper[7744]: I0220 14:51:44.317249 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 20 14:51:45.317292 master-0 kubenswrapper[7744]: I0220 14:51:45.317190 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:45.317292 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:45.317292 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:45.317292 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:45.318254 master-0 kubenswrapper[7744]: I0220 14:51:45.317314 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:46.316493 master-0 kubenswrapper[7744]: I0220 14:51:46.316426 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:46.316493 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:46.316493 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:46.316493 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:46.317150 master-0 kubenswrapper[7744]: I0220 14:51:46.316527 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:47.317432 master-0 kubenswrapper[7744]: I0220 14:51:47.317317 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:47.317432 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:47.317432 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:47.317432 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:47.317432 master-0 kubenswrapper[7744]: I0220 14:51:47.317416 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:48.317024 master-0 kubenswrapper[7744]: I0220 14:51:48.316612 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:48.317024 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:48.317024 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:48.317024 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:48.317024 master-0 kubenswrapper[7744]: I0220 14:51:48.316675 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:49.316419 master-0 kubenswrapper[7744]: I0220 14:51:49.316357 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:49.316419 master-0 kubenswrapper[7744]: 
[-]has-synced failed: reason withheld Feb 20 14:51:49.316419 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:49.316419 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:49.316845 master-0 kubenswrapper[7744]: I0220 14:51:49.316441 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:50.083713 master-0 kubenswrapper[7744]: I0220 14:51:50.083602 7744 patch_prober.go:28] interesting pod/machine-config-daemon-ztgdm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 14:51:50.083713 master-0 kubenswrapper[7744]: I0220 14:51:50.083701 7744 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" podUID="d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 14:51:50.318722 master-0 kubenswrapper[7744]: I0220 14:51:50.318642 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:50.318722 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:50.318722 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:50.318722 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:50.319453 master-0 kubenswrapper[7744]: I0220 14:51:50.319399 7744 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:51.317789 master-0 kubenswrapper[7744]: I0220 14:51:51.317709 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:51.317789 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:51.317789 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:51.317789 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:51.318421 master-0 kubenswrapper[7744]: I0220 14:51:51.317803 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:52.317020 master-0 kubenswrapper[7744]: I0220 14:51:52.316911 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:52.317020 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:52.317020 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:52.317020 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:52.317681 master-0 kubenswrapper[7744]: I0220 14:51:52.317047 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:53.317207 
master-0 kubenswrapper[7744]: I0220 14:51:53.317127 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:51:53.317207 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:51:53.317207 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:51:53.317207 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:51:53.318127 master-0 kubenswrapper[7744]: I0220 14:51:53.317225 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:51:53.318127 master-0 kubenswrapper[7744]: I0220 14:51:53.317297 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:51:53.318127 master-0 kubenswrapper[7744]: I0220 14:51:53.318094 7744 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"f3242db03cac46e4568d01c2eb90056f6c103228ea7040c2d234fdcf31ba865d"} pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" containerMessage="Container router failed startup probe, will be restarted" Feb 20 14:51:53.318327 master-0 kubenswrapper[7744]: I0220 14:51:53.318160 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" containerID="cri-o://f3242db03cac46e4568d01c2eb90056f6c103228ea7040c2d234fdcf31ba865d" gracePeriod=3600 Feb 20 14:52:20.084396 master-0 kubenswrapper[7744]: I0220 14:52:20.084306 7744 patch_prober.go:28] interesting pod/machine-config-daemon-ztgdm 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 20 14:52:20.085402 master-0 kubenswrapper[7744]: I0220 14:52:20.084402 7744 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" podUID="d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 20 14:52:40.410484 master-0 kubenswrapper[7744]: I0220 14:52:40.410384 7744 generic.go:334] "Generic (PLEG): container finished" podID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerID="f3242db03cac46e4568d01c2eb90056f6c103228ea7040c2d234fdcf31ba865d" exitCode=0 Feb 20 14:52:40.410484 master-0 kubenswrapper[7744]: I0220 14:52:40.410461 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" event={"ID":"5f55b652-bef8-4f50-9d1d-9d0a340c1dea","Type":"ContainerDied","Data":"f3242db03cac46e4568d01c2eb90056f6c103228ea7040c2d234fdcf31ba865d"} Feb 20 14:52:40.410484 master-0 kubenswrapper[7744]: I0220 14:52:40.410508 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" event={"ID":"5f55b652-bef8-4f50-9d1d-9d0a340c1dea","Type":"ContainerStarted","Data":"b8c9ab75c341608bbd631623c30a262c8f71065b35633a99f02888aa224f7c9c"} Feb 20 14:52:41.314282 master-0 kubenswrapper[7744]: I0220 14:52:41.314212 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:52:41.318836 master-0 kubenswrapper[7744]: I0220 14:52:41.318766 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:52:41.318836 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:52:41.318836 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:52:41.318836 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:52:41.319177 master-0 kubenswrapper[7744]: I0220 14:52:41.318854 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:52:42.317839 master-0 kubenswrapper[7744]: I0220 14:52:42.317648 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:52:42.317839 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:52:42.317839 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:52:42.317839 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:52:42.318865 master-0 kubenswrapper[7744]: I0220 14:52:42.317845 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:52:43.316879 master-0 kubenswrapper[7744]: I0220 14:52:43.316805 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:52:43.316879 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:52:43.316879 
master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:52:43.316879 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:52:43.317396 master-0 kubenswrapper[7744]: I0220 14:52:43.316898 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:52:44.317292 master-0 kubenswrapper[7744]: I0220 14:52:44.317176 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:52:44.317292 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:52:44.317292 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:52:44.317292 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:52:44.318506 master-0 kubenswrapper[7744]: I0220 14:52:44.317293 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:52:45.317946 master-0 kubenswrapper[7744]: I0220 14:52:45.317851 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:52:45.317946 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:52:45.317946 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:52:45.317946 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:52:45.318701 master-0 kubenswrapper[7744]: I0220 14:52:45.317969 7744 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:52:46.316642 master-0 kubenswrapper[7744]: I0220 14:52:46.316539 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:52:46.316642 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:52:46.316642 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:52:46.316642 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:52:46.317280 master-0 kubenswrapper[7744]: I0220 14:52:46.316651 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:52:47.317264 master-0 kubenswrapper[7744]: I0220 14:52:47.317158 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:52:47.317264 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:52:47.317264 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:52:47.317264 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:52:47.318247 master-0 kubenswrapper[7744]: I0220 14:52:47.317275 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 20 14:52:48.317616 master-0 kubenswrapper[7744]: I0220 14:52:48.317538 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:52:48.317616 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:52:48.317616 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:52:48.317616 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:52:48.319346 master-0 kubenswrapper[7744]: I0220 14:52:48.317641 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:52:49.317167 master-0 kubenswrapper[7744]: I0220 14:52:49.317019 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:52:49.317167 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:52:49.317167 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:52:49.317167 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:52:49.317167 master-0 kubenswrapper[7744]: I0220 14:52:49.317128 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:52:50.314004 master-0 kubenswrapper[7744]: I0220 14:52:50.313913 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:52:50.317976 master-0 kubenswrapper[7744]: I0220 14:52:50.317876 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:52:50.317976 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:52:50.317976 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:52:50.317976 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:52:50.318478 master-0 kubenswrapper[7744]: I0220 14:52:50.317972 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:52:51.316595 master-0 kubenswrapper[7744]: I0220 14:52:51.316453 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:52:51.316595 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:52:51.316595 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:52:51.316595 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:52:51.317718 master-0 kubenswrapper[7744]: I0220 14:52:51.316592 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:52:52.316632 master-0 kubenswrapper[7744]: I0220 14:52:52.316509 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:52:52.316632 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:52:52.316632 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:52:52.316632 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:52:52.316632 master-0 kubenswrapper[7744]: I0220 14:52:52.316619 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[... identical Startup probe failure entries (patch_prober.go:28 and prober.go:107, pod openshift-ingress/router-default-7b65dc9fcb-tlsdt, "HTTP probe failed with statuscode: 500"; [-]backend-http failed, [-]has-synced failed, [+]process-running ok, healthz check failed) repeat once per second from 14:52:53 through 14:53:38 ...]
Feb 20 14:53:39.317757 master-0 kubenswrapper[7744]: I0220 14:53:39.317682 7744 patch_prober.go:28] interesting
pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:39.317757 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:39.317757 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:39.317757 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:39.318704 master-0 kubenswrapper[7744]: I0220 14:53:39.317778 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:40.316535 master-0 kubenswrapper[7744]: I0220 14:53:40.316416 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:40.316535 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:40.316535 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:40.316535 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:40.317212 master-0 kubenswrapper[7744]: I0220 14:53:40.316541 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:40.898970 master-0 kubenswrapper[7744]: I0220 14:53:40.898880 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5qlzq"] Feb 20 14:53:40.900414 master-0 kubenswrapper[7744]: I0220 14:53:40.900358 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5qlzq" Feb 20 14:53:40.904693 master-0 kubenswrapper[7744]: I0220 14:53:40.904594 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 20 14:53:40.904814 master-0 kubenswrapper[7744]: I0220 14:53:40.904678 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 20 14:53:40.905059 master-0 kubenswrapper[7744]: I0220 14:53:40.904595 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-c2dd6" Feb 20 14:53:40.908093 master-0 kubenswrapper[7744]: I0220 14:53:40.908036 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 20 14:53:40.925521 master-0 kubenswrapper[7744]: I0220 14:53:40.925424 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5qlzq"] Feb 20 14:53:41.034153 master-0 kubenswrapper[7744]: I0220 14:53:41.034084 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc9pl\" (UniqueName: \"kubernetes.io/projected/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-kube-api-access-lc9pl\") pod \"ingress-canary-5qlzq\" (UID: \"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665\") " pod="openshift-ingress-canary/ingress-canary-5qlzq" Feb 20 14:53:41.034153 master-0 kubenswrapper[7744]: I0220 14:53:41.034146 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-cert\") pod \"ingress-canary-5qlzq\" (UID: \"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665\") " pod="openshift-ingress-canary/ingress-canary-5qlzq" Feb 20 14:53:41.136073 master-0 kubenswrapper[7744]: I0220 14:53:41.136003 7744 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lc9pl\" (UniqueName: \"kubernetes.io/projected/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-kube-api-access-lc9pl\") pod \"ingress-canary-5qlzq\" (UID: \"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665\") " pod="openshift-ingress-canary/ingress-canary-5qlzq" Feb 20 14:53:41.136253 master-0 kubenswrapper[7744]: I0220 14:53:41.136081 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-cert\") pod \"ingress-canary-5qlzq\" (UID: \"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665\") " pod="openshift-ingress-canary/ingress-canary-5qlzq" Feb 20 14:53:41.138757 master-0 kubenswrapper[7744]: I0220 14:53:41.138718 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 20 14:53:41.153486 master-0 kubenswrapper[7744]: I0220 14:53:41.153181 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-cert\") pod \"ingress-canary-5qlzq\" (UID: \"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665\") " pod="openshift-ingress-canary/ingress-canary-5qlzq" Feb 20 14:53:41.161129 master-0 kubenswrapper[7744]: I0220 14:53:41.161092 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 20 14:53:41.172309 master-0 kubenswrapper[7744]: I0220 14:53:41.170751 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 20 14:53:41.222351 master-0 kubenswrapper[7744]: I0220 14:53:41.222269 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc9pl\" (UniqueName: \"kubernetes.io/projected/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-kube-api-access-lc9pl\") pod \"ingress-canary-5qlzq\" (UID: \"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665\") " 
pod="openshift-ingress-canary/ingress-canary-5qlzq" Feb 20 14:53:41.251668 master-0 kubenswrapper[7744]: I0220 14:53:41.251575 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-c2dd6" Feb 20 14:53:41.259694 master-0 kubenswrapper[7744]: I0220 14:53:41.259617 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5qlzq" Feb 20 14:53:41.318117 master-0 kubenswrapper[7744]: I0220 14:53:41.318053 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:41.318117 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:41.318117 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:41.318117 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:41.318527 master-0 kubenswrapper[7744]: I0220 14:53:41.318140 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:41.767169 master-0 kubenswrapper[7744]: I0220 14:53:41.766340 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5qlzq"] Feb 20 14:53:41.776206 master-0 kubenswrapper[7744]: W0220 14:53:41.776136 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e7cac87_2eaa_4dad_b2dc_c8ed0557c665.slice/crio-013da989dc1e60fa75e3d1e3955a83dece7eed7353880205a9acd5aa5c2d4d69 WatchSource:0}: Error finding container 013da989dc1e60fa75e3d1e3955a83dece7eed7353880205a9acd5aa5c2d4d69: Status 404 returned error can't find the 
container with id 013da989dc1e60fa75e3d1e3955a83dece7eed7353880205a9acd5aa5c2d4d69 Feb 20 14:53:41.925849 master-0 kubenswrapper[7744]: I0220 14:53:41.925763 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5qlzq" event={"ID":"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665","Type":"ContainerStarted","Data":"013da989dc1e60fa75e3d1e3955a83dece7eed7353880205a9acd5aa5c2d4d69"} Feb 20 14:53:41.928518 master-0 kubenswrapper[7744]: I0220 14:53:41.928453 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-fjtrw_4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/ingress-operator/1.log" Feb 20 14:53:41.930176 master-0 kubenswrapper[7744]: I0220 14:53:41.930120 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-fjtrw_4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/ingress-operator/0.log" Feb 20 14:53:41.930344 master-0 kubenswrapper[7744]: I0220 14:53:41.930205 7744 generic.go:334] "Generic (PLEG): container finished" podID="4b6a656c-40d6-4c63-9c6f-ac943eae4c9a" containerID="714dbfc4fc378943ded0e9dbde4bd4d13b8c24e9f4a8b6486b4468145db42e05" exitCode=1 Feb 20 14:53:41.930344 master-0 kubenswrapper[7744]: I0220 14:53:41.930254 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerDied","Data":"714dbfc4fc378943ded0e9dbde4bd4d13b8c24e9f4a8b6486b4468145db42e05"} Feb 20 14:53:41.930344 master-0 kubenswrapper[7744]: I0220 14:53:41.930304 7744 scope.go:117] "RemoveContainer" containerID="d3902c23a65d809f06a7ebdcc4af6b01c4d6059cec90ec0825ac32ffd942466d" Feb 20 14:53:41.931173 master-0 kubenswrapper[7744]: I0220 14:53:41.931119 7744 scope.go:117] "RemoveContainer" containerID="714dbfc4fc378943ded0e9dbde4bd4d13b8c24e9f4a8b6486b4468145db42e05" Feb 20 14:53:41.931616 master-0 kubenswrapper[7744]: E0220 
14:53:41.931554 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-fjtrw_openshift-ingress-operator(4b6a656c-40d6-4c63-9c6f-ac943eae4c9a)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" podUID="4b6a656c-40d6-4c63-9c6f-ac943eae4c9a" Feb 20 14:53:42.317246 master-0 kubenswrapper[7744]: I0220 14:53:42.317112 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:42.317246 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:42.317246 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:42.317246 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:42.317701 master-0 kubenswrapper[7744]: I0220 14:53:42.317265 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:42.939526 master-0 kubenswrapper[7744]: I0220 14:53:42.939442 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5qlzq" event={"ID":"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665","Type":"ContainerStarted","Data":"98dbd2fe6ba8be6befbd533cc0cc2296e7e140b533b1c3130dfcc27e3db2bb67"} Feb 20 14:53:42.942138 master-0 kubenswrapper[7744]: I0220 14:53:42.942084 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-fjtrw_4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/ingress-operator/1.log" Feb 20 14:53:43.318044 master-0 kubenswrapper[7744]: I0220 14:53:43.317911 7744 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:43.318044 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:43.318044 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:43.318044 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:43.318893 master-0 kubenswrapper[7744]: I0220 14:53:43.318794 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:44.317562 master-0 kubenswrapper[7744]: I0220 14:53:44.317461 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:44.317562 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:44.317562 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:44.317562 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:44.319150 master-0 kubenswrapper[7744]: I0220 14:53:44.317549 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:45.317968 master-0 kubenswrapper[7744]: I0220 14:53:45.316236 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:45.317968 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:45.317968 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:45.317968 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:45.317968 master-0 kubenswrapper[7744]: I0220 14:53:45.316457 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:46.317571 master-0 kubenswrapper[7744]: I0220 14:53:46.317478 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:46.317571 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:46.317571 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:46.317571 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:46.318678 master-0 kubenswrapper[7744]: I0220 14:53:46.317589 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:47.317041 master-0 kubenswrapper[7744]: I0220 14:53:47.316963 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:47.317041 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:47.317041 master-0 kubenswrapper[7744]: [+]process-running ok 
Feb 20 14:53:47.317041 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:47.317041 master-0 kubenswrapper[7744]: I0220 14:53:47.317035 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:48.317595 master-0 kubenswrapper[7744]: I0220 14:53:48.317546 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:48.317595 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:48.317595 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:48.317595 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:48.318297 master-0 kubenswrapper[7744]: I0220 14:53:48.317610 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:49.316672 master-0 kubenswrapper[7744]: I0220 14:53:49.316590 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:49.316672 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:49.316672 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:49.316672 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:49.317125 master-0 kubenswrapper[7744]: I0220 14:53:49.316677 7744 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:50.316992 master-0 kubenswrapper[7744]: I0220 14:53:50.316853 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:50.316992 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:50.316992 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:50.316992 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:50.318190 master-0 kubenswrapper[7744]: I0220 14:53:50.316974 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:51.318328 master-0 kubenswrapper[7744]: I0220 14:53:51.318019 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:51.318328 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:51.318328 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:51.318328 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:51.318328 master-0 kubenswrapper[7744]: I0220 14:53:51.318143 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:52.317910 
master-0 kubenswrapper[7744]: I0220 14:53:52.317117 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:52.317910 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:52.317910 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:52.317910 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:52.319488 master-0 kubenswrapper[7744]: I0220 14:53:52.317491 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:53.316882 master-0 kubenswrapper[7744]: I0220 14:53:53.316821 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:53.316882 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:53.316882 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:53.316882 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:53.317578 master-0 kubenswrapper[7744]: I0220 14:53:53.317535 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:54.319692 master-0 kubenswrapper[7744]: I0220 14:53:54.318818 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:54.319692 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:54.319692 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:54.319692 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:54.319692 master-0 kubenswrapper[7744]: I0220 14:53:54.318886 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:55.038082 master-0 kubenswrapper[7744]: I0220 14:53:55.038022 7744 scope.go:117] "RemoveContainer" containerID="714dbfc4fc378943ded0e9dbde4bd4d13b8c24e9f4a8b6486b4468145db42e05" Feb 20 14:53:55.071239 master-0 kubenswrapper[7744]: I0220 14:53:55.071131 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5qlzq" podStartSLOduration=15.071106998 podStartE2EDuration="15.071106998s" podCreationTimestamp="2026-02-20 14:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:53:42.967186295 +0000 UTC m=+422.169386275" watchObservedRunningTime="2026-02-20 14:53:55.071106998 +0000 UTC m=+434.273306948" Feb 20 14:53:55.317219 master-0 kubenswrapper[7744]: I0220 14:53:55.317031 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:55.317219 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:55.317219 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:55.317219 master-0 kubenswrapper[7744]: healthz check failed Feb 20 
14:53:55.317219 master-0 kubenswrapper[7744]: I0220 14:53:55.317128 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:56.049736 master-0 kubenswrapper[7744]: I0220 14:53:56.049672 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-fjtrw_4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/ingress-operator/1.log" Feb 20 14:53:56.050415 master-0 kubenswrapper[7744]: I0220 14:53:56.050363 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerStarted","Data":"ba86653512a4222e60f99c9a0811e8150bea75c06b16f3bd7d165d8b4d82ace0"} Feb 20 14:53:56.318095 master-0 kubenswrapper[7744]: I0220 14:53:56.317890 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:56.318095 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:56.318095 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:56.318095 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:56.318095 master-0 kubenswrapper[7744]: I0220 14:53:56.318026 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:57.317256 master-0 kubenswrapper[7744]: I0220 14:53:57.317156 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:57.317256 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:57.317256 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:57.317256 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:57.318355 master-0 kubenswrapper[7744]: I0220 14:53:57.317274 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:58.316648 master-0 kubenswrapper[7744]: I0220 14:53:58.316531 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:58.316648 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:53:58.316648 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:53:58.316648 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:53:58.316648 master-0 kubenswrapper[7744]: I0220 14:53:58.316601 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:53:59.317571 master-0 kubenswrapper[7744]: I0220 14:53:59.317512 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:53:59.317571 master-0 kubenswrapper[7744]: 
[-]has-synced failed: reason withheld
Feb 20 14:53:59.317571 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:53:59.317571 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:53:59.318600 master-0 kubenswrapper[7744]: I0220 14:53:59.318553 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:00.316758 master-0 kubenswrapper[7744]: I0220 14:54:00.316414 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:00.316758 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:00.316758 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:00.316758 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:00.316758 master-0 kubenswrapper[7744]: I0220 14:54:00.316659 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:01.317053 master-0 kubenswrapper[7744]: I0220 14:54:01.316986 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:01.317053 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:01.317053 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:01.317053 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:01.317961 master-0 kubenswrapper[7744]: I0220 14:54:01.317888 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:02.316442 master-0 kubenswrapper[7744]: I0220 14:54:02.316350 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:02.316442 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:02.316442 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:02.316442 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:02.317705 master-0 kubenswrapper[7744]: I0220 14:54:02.316442 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:03.316887 master-0 kubenswrapper[7744]: I0220 14:54:03.316785 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:03.316887 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:03.316887 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:03.316887 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:03.317959 master-0 kubenswrapper[7744]: I0220 14:54:03.316892 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:04.317183 master-0 kubenswrapper[7744]: I0220 14:54:04.317071 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:04.317183 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:04.317183 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:04.317183 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:04.317183 master-0 kubenswrapper[7744]: I0220 14:54:04.317155 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:05.316197 master-0 kubenswrapper[7744]: I0220 14:54:05.316135 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:05.316197 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:05.316197 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:05.316197 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:05.316611 master-0 kubenswrapper[7744]: I0220 14:54:05.316204 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:06.316216 master-0 kubenswrapper[7744]: I0220 14:54:06.316130 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:06.316216 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:06.316216 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:06.316216 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:06.316216 master-0 kubenswrapper[7744]: I0220 14:54:06.316209 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:07.316986 master-0 kubenswrapper[7744]: I0220 14:54:07.316312 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:07.316986 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:07.316986 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:07.316986 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:07.316986 master-0 kubenswrapper[7744]: I0220 14:54:07.316427 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:08.317240 master-0 kubenswrapper[7744]: I0220 14:54:08.317162 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:08.317240 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:08.317240 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:08.317240 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:08.317240 master-0 kubenswrapper[7744]: I0220 14:54:08.317248 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:09.317774 master-0 kubenswrapper[7744]: I0220 14:54:09.317703 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:09.317774 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:09.317774 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:09.317774 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:09.319220 master-0 kubenswrapper[7744]: I0220 14:54:09.317780 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:09.804363 master-0 kubenswrapper[7744]: I0220 14:54:09.804297 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69686f5989-xkhpg"]
Feb 20 14:54:09.804576 master-0 kubenswrapper[7744]: I0220 14:54:09.804510 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" podUID="26c30461-efe3-4999-9698-f3c478c71fa0" containerName="controller-manager" containerID="cri-o://d8497a234a46a68aecc54d147968178bed221b9af852b897918c6819201a92bf" gracePeriod=30
Feb 20 14:54:09.827270 master-0 kubenswrapper[7744]: I0220 14:54:09.826374 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj"]
Feb 20 14:54:09.827270 master-0 kubenswrapper[7744]: I0220 14:54:09.826596 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" podUID="a5aae2e2-7323-4927-a5ca-645e2a8b7bf9" containerName="route-controller-manager" containerID="cri-o://229362d733da42839f2135af7287e0625de00b859da8c4afba1d54667a79cddd" gracePeriod=30
Feb 20 14:54:10.181688 master-0 kubenswrapper[7744]: I0220 14:54:10.177918 7744 generic.go:334] "Generic (PLEG): container finished" podID="a5aae2e2-7323-4927-a5ca-645e2a8b7bf9" containerID="229362d733da42839f2135af7287e0625de00b859da8c4afba1d54667a79cddd" exitCode=0
Feb 20 14:54:10.181688 master-0 kubenswrapper[7744]: I0220 14:54:10.177965 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" event={"ID":"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9","Type":"ContainerDied","Data":"229362d733da42839f2135af7287e0625de00b859da8c4afba1d54667a79cddd"}
Feb 20 14:54:10.181688 master-0 kubenswrapper[7744]: I0220 14:54:10.179279 7744 generic.go:334] "Generic (PLEG): container finished" podID="26c30461-efe3-4999-9698-f3c478c71fa0" containerID="d8497a234a46a68aecc54d147968178bed221b9af852b897918c6819201a92bf" exitCode=0
Feb 20 14:54:10.181688 master-0 kubenswrapper[7744]: I0220 14:54:10.179315 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" event={"ID":"26c30461-efe3-4999-9698-f3c478c71fa0","Type":"ContainerDied","Data":"d8497a234a46a68aecc54d147968178bed221b9af852b897918c6819201a92bf"}
Feb 20 14:54:10.322120 master-0 kubenswrapper[7744]: I0220 14:54:10.322045 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:10.322120 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:10.322120 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:10.322120 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:10.323407 master-0 kubenswrapper[7744]: I0220 14:54:10.322138 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:10.369742 master-0 kubenswrapper[7744]: I0220 14:54:10.369691 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg"
Feb 20 14:54:10.378583 master-0 kubenswrapper[7744]: I0220 14:54:10.378543 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj"
Feb 20 14:54:10.522946 master-0 kubenswrapper[7744]: I0220 14:54:10.522853 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9v89\" (UniqueName: \"kubernetes.io/projected/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-kube-api-access-c9v89\") pod \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") "
Feb 20 14:54:10.523998 master-0 kubenswrapper[7744]: I0220 14:54:10.523314 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-proxy-ca-bundles\") pod \"26c30461-efe3-4999-9698-f3c478c71fa0\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") "
Feb 20 14:54:10.524308 master-0 kubenswrapper[7744]: I0220 14:54:10.524022 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-client-ca\") pod \"26c30461-efe3-4999-9698-f3c478c71fa0\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") "
Feb 20 14:54:10.524308 master-0 kubenswrapper[7744]: I0220 14:54:10.524048 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "26c30461-efe3-4999-9698-f3c478c71fa0" (UID: "26c30461-efe3-4999-9698-f3c478c71fa0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:54:10.524308 master-0 kubenswrapper[7744]: I0220 14:54:10.524061 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26c30461-efe3-4999-9698-f3c478c71fa0-serving-cert\") pod \"26c30461-efe3-4999-9698-f3c478c71fa0\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") "
Feb 20 14:54:10.524308 master-0 kubenswrapper[7744]: I0220 14:54:10.524139 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-client-ca\") pod \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") "
Feb 20 14:54:10.524308 master-0 kubenswrapper[7744]: I0220 14:54:10.524179 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-config\") pod \"26c30461-efe3-4999-9698-f3c478c71fa0\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") "
Feb 20 14:54:10.524308 master-0 kubenswrapper[7744]: I0220 14:54:10.524243 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-config\") pod \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") "
Feb 20 14:54:10.524695 master-0 kubenswrapper[7744]: I0220 14:54:10.524657 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-client-ca" (OuterVolumeSpecName: "client-ca") pod "26c30461-efe3-4999-9698-f3c478c71fa0" (UID: "26c30461-efe3-4999-9698-f3c478c71fa0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:54:10.524857 master-0 kubenswrapper[7744]: I0220 14:54:10.524803 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-config" (OuterVolumeSpecName: "config") pod "26c30461-efe3-4999-9698-f3c478c71fa0" (UID: "26c30461-efe3-4999-9698-f3c478c71fa0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:54:10.525109 master-0 kubenswrapper[7744]: I0220 14:54:10.525024 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-client-ca" (OuterVolumeSpecName: "client-ca") pod "a5aae2e2-7323-4927-a5ca-645e2a8b7bf9" (UID: "a5aae2e2-7323-4927-a5ca-645e2a8b7bf9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:54:10.525235 master-0 kubenswrapper[7744]: I0220 14:54:10.524314 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxxc8\" (UniqueName: \"kubernetes.io/projected/26c30461-efe3-4999-9698-f3c478c71fa0-kube-api-access-gxxc8\") pod \"26c30461-efe3-4999-9698-f3c478c71fa0\" (UID: \"26c30461-efe3-4999-9698-f3c478c71fa0\") "
Feb 20 14:54:10.525318 master-0 kubenswrapper[7744]: I0220 14:54:10.525224 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-config" (OuterVolumeSpecName: "config") pod "a5aae2e2-7323-4927-a5ca-645e2a8b7bf9" (UID: "a5aae2e2-7323-4927-a5ca-645e2a8b7bf9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 14:54:10.525318 master-0 kubenswrapper[7744]: I0220 14:54:10.525275 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-serving-cert\") pod \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\" (UID: \"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9\") "
Feb 20 14:54:10.526000 master-0 kubenswrapper[7744]: I0220 14:54:10.525955 7744 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 14:54:10.526000 master-0 kubenswrapper[7744]: I0220 14:54:10.525992 7744 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 14:54:10.526154 master-0 kubenswrapper[7744]: I0220 14:54:10.526014 7744 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-config\") on node \"master-0\" DevicePath \"\""
Feb 20 14:54:10.526154 master-0 kubenswrapper[7744]: I0220 14:54:10.526036 7744 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-config\") on node \"master-0\" DevicePath \"\""
Feb 20 14:54:10.526154 master-0 kubenswrapper[7744]: I0220 14:54:10.526055 7744 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26c30461-efe3-4999-9698-f3c478c71fa0-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 20 14:54:10.527112 master-0 kubenswrapper[7744]: I0220 14:54:10.527055 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-kube-api-access-c9v89" (OuterVolumeSpecName: "kube-api-access-c9v89") pod "a5aae2e2-7323-4927-a5ca-645e2a8b7bf9" (UID: "a5aae2e2-7323-4927-a5ca-645e2a8b7bf9"). InnerVolumeSpecName "kube-api-access-c9v89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:54:10.527520 master-0 kubenswrapper[7744]: I0220 14:54:10.527469 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26c30461-efe3-4999-9698-f3c478c71fa0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "26c30461-efe3-4999-9698-f3c478c71fa0" (UID: "26c30461-efe3-4999-9698-f3c478c71fa0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 14:54:10.528748 master-0 kubenswrapper[7744]: I0220 14:54:10.528706 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c30461-efe3-4999-9698-f3c478c71fa0-kube-api-access-gxxc8" (OuterVolumeSpecName: "kube-api-access-gxxc8") pod "26c30461-efe3-4999-9698-f3c478c71fa0" (UID: "26c30461-efe3-4999-9698-f3c478c71fa0"). InnerVolumeSpecName "kube-api-access-gxxc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:54:10.530016 master-0 kubenswrapper[7744]: I0220 14:54:10.529873 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a5aae2e2-7323-4927-a5ca-645e2a8b7bf9" (UID: "a5aae2e2-7323-4927-a5ca-645e2a8b7bf9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 14:54:10.627801 master-0 kubenswrapper[7744]: I0220 14:54:10.627638 7744 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26c30461-efe3-4999-9698-f3c478c71fa0-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 14:54:10.627801 master-0 kubenswrapper[7744]: I0220 14:54:10.627749 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxxc8\" (UniqueName: \"kubernetes.io/projected/26c30461-efe3-4999-9698-f3c478c71fa0-kube-api-access-gxxc8\") on node \"master-0\" DevicePath \"\""
Feb 20 14:54:10.627801 master-0 kubenswrapper[7744]: I0220 14:54:10.627784 7744 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 14:54:10.627801 master-0 kubenswrapper[7744]: I0220 14:54:10.627810 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9v89\" (UniqueName: \"kubernetes.io/projected/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9-kube-api-access-c9v89\") on node \"master-0\" DevicePath \"\""
Feb 20 14:54:11.189744 master-0 kubenswrapper[7744]: I0220 14:54:11.189681 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg"
Feb 20 14:54:11.189744 master-0 kubenswrapper[7744]: I0220 14:54:11.189701 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69686f5989-xkhpg" event={"ID":"26c30461-efe3-4999-9698-f3c478c71fa0","Type":"ContainerDied","Data":"3a519667a7ca7f07e39cecb256979b5861f401752064e994ca5bfe33609b5b72"}
Feb 20 14:54:11.190024 master-0 kubenswrapper[7744]: I0220 14:54:11.189810 7744 scope.go:117] "RemoveContainer" containerID="d8497a234a46a68aecc54d147968178bed221b9af852b897918c6819201a92bf"
Feb 20 14:54:11.195650 master-0 kubenswrapper[7744]: I0220 14:54:11.195595 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj" event={"ID":"a5aae2e2-7323-4927-a5ca-645e2a8b7bf9","Type":"ContainerDied","Data":"d6248ee06a48f06e478ef5063a098128ee8d0223c87106accc15750d4a9a382d"}
Feb 20 14:54:11.195736 master-0 kubenswrapper[7744]: I0220 14:54:11.195711 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj"
Feb 20 14:54:11.217234 master-0 kubenswrapper[7744]: I0220 14:54:11.217175 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69686f5989-xkhpg"]
Feb 20 14:54:11.217327 master-0 kubenswrapper[7744]: I0220 14:54:11.217311 7744 scope.go:117] "RemoveContainer" containerID="229362d733da42839f2135af7287e0625de00b859da8c4afba1d54667a79cddd"
Feb 20 14:54:11.223420 master-0 kubenswrapper[7744]: I0220 14:54:11.223370 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69686f5989-xkhpg"]
Feb 20 14:54:11.240124 master-0 kubenswrapper[7744]: I0220 14:54:11.240059 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj"]
Feb 20 14:54:11.253538 master-0 kubenswrapper[7744]: I0220 14:54:11.253455 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c6947b888-mrmnj"]
Feb 20 14:54:11.316702 master-0 kubenswrapper[7744]: I0220 14:54:11.316635 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:11.316702 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:11.316702 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:11.316702 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:11.316904 master-0 kubenswrapper[7744]: I0220 14:54:11.316738 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:11.390861 master-0 kubenswrapper[7744]: I0220 14:54:11.390671 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"]
Feb 20 14:54:11.391647 master-0 kubenswrapper[7744]: E0220 14:54:11.391616 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c30461-efe3-4999-9698-f3c478c71fa0" containerName="controller-manager"
Feb 20 14:54:11.391726 master-0 kubenswrapper[7744]: I0220 14:54:11.391685 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c30461-efe3-4999-9698-f3c478c71fa0" containerName="controller-manager"
Feb 20 14:54:11.391726 master-0 kubenswrapper[7744]: E0220 14:54:11.391709 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5aae2e2-7323-4927-a5ca-645e2a8b7bf9" containerName="route-controller-manager"
Feb 20 14:54:11.391866 master-0 kubenswrapper[7744]: I0220 14:54:11.391723 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5aae2e2-7323-4927-a5ca-645e2a8b7bf9" containerName="route-controller-manager"
Feb 20 14:54:11.392659 master-0 kubenswrapper[7744]: I0220 14:54:11.392607 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c30461-efe3-4999-9698-f3c478c71fa0" containerName="controller-manager"
Feb 20 14:54:11.392783 master-0 kubenswrapper[7744]: I0220 14:54:11.392694 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5aae2e2-7323-4927-a5ca-645e2a8b7bf9" containerName="route-controller-manager"
Feb 20 14:54:11.393525 master-0 kubenswrapper[7744]: I0220 14:54:11.393491 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"
Feb 20 14:54:11.396434 master-0 kubenswrapper[7744]: I0220 14:54:11.396344 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-2w8rc"
Feb 20 14:54:11.397057 master-0 kubenswrapper[7744]: I0220 14:54:11.397008 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 20 14:54:11.397536 master-0 kubenswrapper[7744]: I0220 14:54:11.397496 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 20 14:54:11.397903 master-0 kubenswrapper[7744]: I0220 14:54:11.397839 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 20 14:54:11.398474 master-0 kubenswrapper[7744]: I0220 14:54:11.398418 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 20 14:54:11.400002 master-0 kubenswrapper[7744]: I0220 14:54:11.399907 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-647657fcb-w9586"]
Feb 20 14:54:11.400457 master-0 kubenswrapper[7744]: I0220 14:54:11.400395 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 20 14:54:11.402061 master-0 kubenswrapper[7744]: I0220 14:54:11.401913 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 14:54:11.404645 master-0 kubenswrapper[7744]: I0220 14:54:11.404601 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-9zvh6"
Feb 20 14:54:11.405494 master-0 kubenswrapper[7744]: I0220 14:54:11.405438 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 20 14:54:11.406366 master-0 kubenswrapper[7744]: I0220 14:54:11.406327 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 20 14:54:11.407157 master-0 kubenswrapper[7744]: I0220 14:54:11.407123 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 20 14:54:11.407431 master-0 kubenswrapper[7744]: I0220 14:54:11.407404 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 20 14:54:11.407752 master-0 kubenswrapper[7744]: I0220 14:54:11.407697 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"]
Feb 20 14:54:11.418031 master-0 kubenswrapper[7744]: I0220 14:54:11.417981 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 20 14:54:11.421957 master-0 kubenswrapper[7744]: I0220 14:54:11.419229 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 20 14:54:11.422244 master-0 kubenswrapper[7744]: I0220 14:54:11.422178 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-647657fcb-w9586"]
Feb 20 14:54:11.544814 master-0 kubenswrapper[7744]: I0220 14:54:11.544742 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 14:54:11.545019 master-0 kubenswrapper[7744]: I0220 14:54:11.544824 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 14:54:11.545107 master-0 kubenswrapper[7744]: I0220 14:54:11.545071 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxjcq\" (UniqueName: \"kubernetes.io/projected/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-kube-api-access-wxjcq\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"
Feb 20 14:54:11.545290 master-0 kubenswrapper[7744]: I0220 14:54:11.545247 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"
Feb 20 14:54:11.545365 master-0 kubenswrapper[7744]: I0220 14:54:11.545293 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 14:54:11.545365 master-0 kubenswrapper[7744]: I0220 14:54:11.545356 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 14:54:11.545567 master-0 kubenswrapper[7744]: I0220 14:54:11.545501 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr5wk\" (UniqueName: \"kubernetes.io/projected/bdf18981-b755-4b11-8793-38bc5e2e755b-kube-api-access-wr5wk\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 14:54:11.545634 master-0 kubenswrapper[7744]: I0220 14:54:11.545607 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"
Feb 20 14:54:11.545710 master-0 kubenswrapper[7744]: I0220 14:54:11.545688 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"
Feb 20 14:54:11.647770 master-0 kubenswrapper[7744]: I0220 14:54:11.647628 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"
Feb 20 14:54:11.647770 master-0 kubenswrapper[7744]: I0220 14:54:11.647704 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 14:54:11.647770 master-0 kubenswrapper[7744]: I0220 14:54:11.647766 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 14:54:11.648092 master-0 kubenswrapper[7744]: I0220 14:54:11.647813 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr5wk\" (UniqueName: \"kubernetes.io/projected/bdf18981-b755-4b11-8793-38bc5e2e755b-kube-api-access-wr5wk\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 14:54:11.648092 master-0 kubenswrapper[7744]: I0220 14:54:11.647856 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"
Feb 20 14:54:11.648179 master-0 kubenswrapper[7744]: I0220 14:54:11.648103 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"
Feb 20 14:54:11.648680 master-0 kubenswrapper[7744]: I0220 14:54:11.648629 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 14:54:11.648734 master-0 kubenswrapper[7744]: I0220 14:54:11.648707 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 14:54:11.648815 master-0 kubenswrapper[7744]: I0220 14:54:11.648778 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxjcq\" (UniqueName: \"kubernetes.io/projected/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-kube-api-access-wxjcq\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") "
pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 14:54:11.650151 master-0 kubenswrapper[7744]: I0220 14:54:11.650096 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 14:54:11.651248 master-0 kubenswrapper[7744]: I0220 14:54:11.650427 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 14:54:11.651248 master-0 kubenswrapper[7744]: I0220 14:54:11.650559 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 14:54:11.651248 master-0 kubenswrapper[7744]: I0220 14:54:11.651200 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 14:54:11.651441 master-0 kubenswrapper[7744]: I0220 14:54:11.651407 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 14:54:11.653426 master-0 kubenswrapper[7744]: I0220 14:54:11.653265 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 14:54:11.654897 master-0 kubenswrapper[7744]: I0220 14:54:11.654845 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 14:54:12.167062 master-0 kubenswrapper[7744]: I0220 14:54:12.166986 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxjcq\" (UniqueName: \"kubernetes.io/projected/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-kube-api-access-wxjcq\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 14:54:12.170958 master-0 kubenswrapper[7744]: I0220 14:54:12.170856 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr5wk\" (UniqueName: \"kubernetes.io/projected/bdf18981-b755-4b11-8793-38bc5e2e755b-kube-api-access-wr5wk\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 
14:54:12.321362 master-0 kubenswrapper[7744]: I0220 14:54:12.321296 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:12.321362 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:12.321362 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:12.321362 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:12.321768 master-0 kubenswrapper[7744]: I0220 14:54:12.321375 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:12.343740 master-0 kubenswrapper[7744]: I0220 14:54:12.343682 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 14:54:12.385906 master-0 kubenswrapper[7744]: I0220 14:54:12.385846 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 14:54:12.679025 master-0 kubenswrapper[7744]: I0220 14:54:12.678555 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 20 14:54:12.679569 master-0 kubenswrapper[7744]: I0220 14:54:12.679372 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:12.681836 master-0 kubenswrapper[7744]: I0220 14:54:12.681482 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-w4mx6" Feb 20 14:54:12.685833 master-0 kubenswrapper[7744]: I0220 14:54:12.682274 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 20 14:54:12.691506 master-0 kubenswrapper[7744]: I0220 14:54:12.689403 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 20 14:54:12.792859 master-0 kubenswrapper[7744]: I0220 14:54:12.792787 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"] Feb 20 14:54:12.801523 master-0 kubenswrapper[7744]: W0220 14:54:12.801443 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63d49b12_8d51_4d97_9f06_ca4c5bf10dcd.slice/crio-a3b80d783578c7d5bcce0396d10b0b7507567b7ddeed1d7dec131680bd38e6da WatchSource:0}: Error finding container a3b80d783578c7d5bcce0396d10b0b7507567b7ddeed1d7dec131680bd38e6da: Status 404 returned error can't find the container with id a3b80d783578c7d5bcce0396d10b0b7507567b7ddeed1d7dec131680bd38e6da Feb 20 14:54:12.874091 master-0 kubenswrapper[7744]: I0220 14:54:12.870848 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b5633b4-0459-499d-8d50-ec6f3b35348e-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"9b5633b4-0459-499d-8d50-ec6f3b35348e\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:12.874091 master-0 kubenswrapper[7744]: I0220 14:54:12.870950 7744 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b5633b4-0459-499d-8d50-ec6f3b35348e-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"9b5633b4-0459-499d-8d50-ec6f3b35348e\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:12.874091 master-0 kubenswrapper[7744]: I0220 14:54:12.870972 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b5633b4-0459-499d-8d50-ec6f3b35348e-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"9b5633b4-0459-499d-8d50-ec6f3b35348e\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:12.912390 master-0 kubenswrapper[7744]: W0220 14:54:12.912340 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdf18981_b755_4b11_8793_38bc5e2e755b.slice/crio-88c6fd1112c1b3efe31f79a2dc6cd9198555dc6b1c7c6547da60005b56efbb9b WatchSource:0}: Error finding container 88c6fd1112c1b3efe31f79a2dc6cd9198555dc6b1c7c6547da60005b56efbb9b: Status 404 returned error can't find the container with id 88c6fd1112c1b3efe31f79a2dc6cd9198555dc6b1c7c6547da60005b56efbb9b Feb 20 14:54:12.914172 master-0 kubenswrapper[7744]: I0220 14:54:12.914109 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-647657fcb-w9586"] Feb 20 14:54:12.971917 master-0 kubenswrapper[7744]: I0220 14:54:12.971858 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b5633b4-0459-499d-8d50-ec6f3b35348e-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"9b5633b4-0459-499d-8d50-ec6f3b35348e\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:12.972124 master-0 kubenswrapper[7744]: I0220 14:54:12.971964 
7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b5633b4-0459-499d-8d50-ec6f3b35348e-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"9b5633b4-0459-499d-8d50-ec6f3b35348e\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:12.972124 master-0 kubenswrapper[7744]: I0220 14:54:12.972085 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b5633b4-0459-499d-8d50-ec6f3b35348e-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"9b5633b4-0459-499d-8d50-ec6f3b35348e\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:12.972268 master-0 kubenswrapper[7744]: I0220 14:54:12.972216 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b5633b4-0459-499d-8d50-ec6f3b35348e-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"9b5633b4-0459-499d-8d50-ec6f3b35348e\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:12.972658 master-0 kubenswrapper[7744]: I0220 14:54:12.972570 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b5633b4-0459-499d-8d50-ec6f3b35348e-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"9b5633b4-0459-499d-8d50-ec6f3b35348e\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:13.000630 master-0 kubenswrapper[7744]: I0220 14:54:13.000596 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b5633b4-0459-499d-8d50-ec6f3b35348e-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"9b5633b4-0459-499d-8d50-ec6f3b35348e\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:13.017910 master-0 kubenswrapper[7744]: 
I0220 14:54:13.017838 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:13.044284 master-0 kubenswrapper[7744]: I0220 14:54:13.044235 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c30461-efe3-4999-9698-f3c478c71fa0" path="/var/lib/kubelet/pods/26c30461-efe3-4999-9698-f3c478c71fa0/volumes" Feb 20 14:54:13.044978 master-0 kubenswrapper[7744]: I0220 14:54:13.044951 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5aae2e2-7323-4927-a5ca-645e2a8b7bf9" path="/var/lib/kubelet/pods/a5aae2e2-7323-4927-a5ca-645e2a8b7bf9/volumes" Feb 20 14:54:13.225688 master-0 kubenswrapper[7744]: I0220 14:54:13.225524 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" event={"ID":"bdf18981-b755-4b11-8793-38bc5e2e755b","Type":"ContainerStarted","Data":"71a3faa6e2a13b4bcadc91647966380b556ee1824a73e0209af007ec80d749b3"} Feb 20 14:54:13.225688 master-0 kubenswrapper[7744]: I0220 14:54:13.225599 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" event={"ID":"bdf18981-b755-4b11-8793-38bc5e2e755b","Type":"ContainerStarted","Data":"88c6fd1112c1b3efe31f79a2dc6cd9198555dc6b1c7c6547da60005b56efbb9b"} Feb 20 14:54:13.226056 master-0 kubenswrapper[7744]: I0220 14:54:13.225881 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 14:54:13.227259 master-0 kubenswrapper[7744]: I0220 14:54:13.227181 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" event={"ID":"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd","Type":"ContainerStarted","Data":"ce6cf48b03cf7ea4bb59cbc88338b3797dd3cd5289e6bbf78ef6ac04abd04f98"} Feb 20 14:54:13.227741 master-0 
kubenswrapper[7744]: I0220 14:54:13.227261 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" event={"ID":"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd","Type":"ContainerStarted","Data":"a3b80d783578c7d5bcce0396d10b0b7507567b7ddeed1d7dec131680bd38e6da"} Feb 20 14:54:13.227741 master-0 kubenswrapper[7744]: I0220 14:54:13.227432 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 14:54:13.239427 master-0 kubenswrapper[7744]: I0220 14:54:13.239333 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 14:54:13.249431 master-0 kubenswrapper[7744]: I0220 14:54:13.249354 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" podStartSLOduration=4.249334925 podStartE2EDuration="4.249334925s" podCreationTimestamp="2026-02-20 14:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:54:13.246567286 +0000 UTC m=+452.448767206" watchObservedRunningTime="2026-02-20 14:54:13.249334925 +0000 UTC m=+452.451534845" Feb 20 14:54:13.274007 master-0 kubenswrapper[7744]: I0220 14:54:13.273895 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" podStartSLOduration=4.27387884 podStartE2EDuration="4.27387884s" podCreationTimestamp="2026-02-20 14:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:54:13.27344271 +0000 UTC m=+452.475642630" watchObservedRunningTime="2026-02-20 14:54:13.27387884 +0000 UTC m=+452.476078760" 
Feb 20 14:54:13.318068 master-0 kubenswrapper[7744]: I0220 14:54:13.317970 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:13.318068 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:13.318068 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:13.318068 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:13.318068 master-0 kubenswrapper[7744]: I0220 14:54:13.318022 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:13.402776 master-0 kubenswrapper[7744]: I0220 14:54:13.402737 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 14:54:13.506602 master-0 kubenswrapper[7744]: I0220 14:54:13.506557 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 20 14:54:14.239774 master-0 kubenswrapper[7744]: I0220 14:54:14.239687 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"9b5633b4-0459-499d-8d50-ec6f3b35348e","Type":"ContainerStarted","Data":"426d2fed4175d2a5893097b98d0c41809a34de5615fe9d14773afa0232ad7999"} Feb 20 14:54:14.239774 master-0 kubenswrapper[7744]: I0220 14:54:14.239763 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"9b5633b4-0459-499d-8d50-ec6f3b35348e","Type":"ContainerStarted","Data":"d13112859731d18686a15159c8e3489e6bdb627690008a2934c19aa43100750a"} 
Feb 20 14:54:14.270540 master-0 kubenswrapper[7744]: I0220 14:54:14.270408 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=2.270379494 podStartE2EDuration="2.270379494s" podCreationTimestamp="2026-02-20 14:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:54:14.263819002 +0000 UTC m=+453.466019022" watchObservedRunningTime="2026-02-20 14:54:14.270379494 +0000 UTC m=+453.472579444" Feb 20 14:54:14.316619 master-0 kubenswrapper[7744]: I0220 14:54:14.316532 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:14.316619 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:14.316619 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:14.316619 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:14.316619 master-0 kubenswrapper[7744]: I0220 14:54:14.316597 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:15.318217 master-0 kubenswrapper[7744]: I0220 14:54:15.318091 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:15.318217 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:15.318217 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:15.318217 
master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:15.319174 master-0 kubenswrapper[7744]: I0220 14:54:15.318227 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:16.317074 master-0 kubenswrapper[7744]: I0220 14:54:16.316972 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:16.317074 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:16.317074 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:16.317074 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:16.317074 master-0 kubenswrapper[7744]: I0220 14:54:16.317073 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:17.316874 master-0 kubenswrapper[7744]: I0220 14:54:17.316788 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:17.316874 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:17.316874 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:17.316874 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:17.317553 master-0 kubenswrapper[7744]: I0220 14:54:17.316899 7744 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:18.316597 master-0 kubenswrapper[7744]: I0220 14:54:18.316533 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:18.316597 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:18.316597 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:18.316597 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:18.316868 master-0 kubenswrapper[7744]: I0220 14:54:18.316622 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:19.083868 master-0 kubenswrapper[7744]: I0220 14:54:19.083709 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 20 14:54:19.084417 master-0 kubenswrapper[7744]: I0220 14:54:19.084118 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="9b5633b4-0459-499d-8d50-ec6f3b35348e" containerName="installer" containerID="cri-o://426d2fed4175d2a5893097b98d0c41809a34de5615fe9d14773afa0232ad7999" gracePeriod=30 Feb 20 14:54:19.316479 master-0 kubenswrapper[7744]: I0220 14:54:19.316428 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 
14:54:19.316479 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:19.316479 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:19.316479 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:19.316799 master-0 kubenswrapper[7744]: I0220 14:54:19.316513 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:20.318945 master-0 kubenswrapper[7744]: I0220 14:54:20.318167 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:20.318945 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:20.318945 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:20.318945 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:20.324599 master-0 kubenswrapper[7744]: I0220 14:54:20.318248 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:21.317190 master-0 kubenswrapper[7744]: I0220 14:54:21.317038 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:21.317190 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:21.317190 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:21.317190 master-0 kubenswrapper[7744]: healthz 
check failed Feb 20 14:54:21.317190 master-0 kubenswrapper[7744]: I0220 14:54:21.317149 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:22.316647 master-0 kubenswrapper[7744]: I0220 14:54:22.316562 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:22.316647 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:22.316647 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:22.316647 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:22.317674 master-0 kubenswrapper[7744]: I0220 14:54:22.316658 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:23.317780 master-0 kubenswrapper[7744]: I0220 14:54:23.317637 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:23.317780 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:23.317780 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:23.317780 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:23.317780 master-0 kubenswrapper[7744]: I0220 14:54:23.317746 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" 
podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:23.677128 master-0 kubenswrapper[7744]: I0220 14:54:23.676983 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 20 14:54:23.677744 master-0 kubenswrapper[7744]: I0220 14:54:23.677699 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 14:54:23.694549 master-0 kubenswrapper[7744]: I0220 14:54:23.694469 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 20 14:54:23.799169 master-0 kubenswrapper[7744]: I0220 14:54:23.799076 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2b9n2"] Feb 20 14:54:23.799991 master-0 kubenswrapper[7744]: I0220 14:54:23.799963 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:23.802114 master-0 kubenswrapper[7744]: I0220 14:54:23.802065 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-brpdw" Feb 20 14:54:23.802264 master-0 kubenswrapper[7744]: I0220 14:54:23.802067 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 20 14:54:23.853344 master-0 kubenswrapper[7744]: I0220 14:54:23.853291 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b364609-1de8-433c-9e6f-1efabc3e5108-kube-api-access\") pod \"installer-2-master-0\" (UID: \"8b364609-1de8-433c-9e6f-1efabc3e5108\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 14:54:23.853561 master-0 kubenswrapper[7744]: I0220 14:54:23.853519 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b364609-1de8-433c-9e6f-1efabc3e5108-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"8b364609-1de8-433c-9e6f-1efabc3e5108\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 14:54:23.853779 master-0 kubenswrapper[7744]: I0220 14:54:23.853728 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b364609-1de8-433c-9e6f-1efabc3e5108-var-lock\") pod \"installer-2-master-0\" (UID: \"8b364609-1de8-433c-9e6f-1efabc3e5108\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 14:54:23.955637 master-0 kubenswrapper[7744]: I0220 14:54:23.955484 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/50a42db6-cb4a-4290-bff8-44fdb2801256-cni-sysctl-allowlist\") pod 
\"cni-sysctl-allowlist-ds-2b9n2\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:23.955637 master-0 kubenswrapper[7744]: I0220 14:54:23.955564 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/50a42db6-cb4a-4290-bff8-44fdb2801256-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-2b9n2\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:23.955917 master-0 kubenswrapper[7744]: I0220 14:54:23.955838 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b364609-1de8-433c-9e6f-1efabc3e5108-kube-api-access\") pod \"installer-2-master-0\" (UID: \"8b364609-1de8-433c-9e6f-1efabc3e5108\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 14:54:23.955917 master-0 kubenswrapper[7744]: I0220 14:54:23.955913 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqh8z\" (UniqueName: \"kubernetes.io/projected/50a42db6-cb4a-4290-bff8-44fdb2801256-kube-api-access-pqh8z\") pod \"cni-sysctl-allowlist-ds-2b9n2\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:23.956138 master-0 kubenswrapper[7744]: I0220 14:54:23.956106 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b364609-1de8-433c-9e6f-1efabc3e5108-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"8b364609-1de8-433c-9e6f-1efabc3e5108\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 14:54:23.956204 master-0 kubenswrapper[7744]: I0220 14:54:23.956170 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" 
(UniqueName: \"kubernetes.io/empty-dir/50a42db6-cb4a-4290-bff8-44fdb2801256-ready\") pod \"cni-sysctl-allowlist-ds-2b9n2\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:23.956365 master-0 kubenswrapper[7744]: I0220 14:54:23.956309 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b364609-1de8-433c-9e6f-1efabc3e5108-var-lock\") pod \"installer-2-master-0\" (UID: \"8b364609-1de8-433c-9e6f-1efabc3e5108\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 14:54:23.956436 master-0 kubenswrapper[7744]: I0220 14:54:23.956347 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b364609-1de8-433c-9e6f-1efabc3e5108-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"8b364609-1de8-433c-9e6f-1efabc3e5108\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 14:54:23.956436 master-0 kubenswrapper[7744]: I0220 14:54:23.956391 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b364609-1de8-433c-9e6f-1efabc3e5108-var-lock\") pod \"installer-2-master-0\" (UID: \"8b364609-1de8-433c-9e6f-1efabc3e5108\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 14:54:23.989876 master-0 kubenswrapper[7744]: I0220 14:54:23.989824 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b364609-1de8-433c-9e6f-1efabc3e5108-kube-api-access\") pod \"installer-2-master-0\" (UID: \"8b364609-1de8-433c-9e6f-1efabc3e5108\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 14:54:24.000526 master-0 kubenswrapper[7744]: I0220 14:54:24.000467 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 20 14:54:24.058025 master-0 kubenswrapper[7744]: I0220 14:54:24.057296 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/50a42db6-cb4a-4290-bff8-44fdb2801256-ready\") pod \"cni-sysctl-allowlist-ds-2b9n2\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:24.058025 master-0 kubenswrapper[7744]: I0220 14:54:24.057492 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/50a42db6-cb4a-4290-bff8-44fdb2801256-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-2b9n2\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:24.058025 master-0 kubenswrapper[7744]: I0220 14:54:24.057532 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/50a42db6-cb4a-4290-bff8-44fdb2801256-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-2b9n2\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:24.058025 master-0 kubenswrapper[7744]: I0220 14:54:24.057655 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqh8z\" (UniqueName: \"kubernetes.io/projected/50a42db6-cb4a-4290-bff8-44fdb2801256-kube-api-access-pqh8z\") pod \"cni-sysctl-allowlist-ds-2b9n2\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:24.058025 master-0 kubenswrapper[7744]: I0220 14:54:24.057724 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/50a42db6-cb4a-4290-bff8-44fdb2801256-ready\") pod 
\"cni-sysctl-allowlist-ds-2b9n2\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:24.058025 master-0 kubenswrapper[7744]: I0220 14:54:24.057946 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/50a42db6-cb4a-4290-bff8-44fdb2801256-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-2b9n2\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:24.058513 master-0 kubenswrapper[7744]: I0220 14:54:24.058100 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/50a42db6-cb4a-4290-bff8-44fdb2801256-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-2b9n2\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:24.072269 master-0 kubenswrapper[7744]: I0220 14:54:24.072203 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqh8z\" (UniqueName: \"kubernetes.io/projected/50a42db6-cb4a-4290-bff8-44fdb2801256-kube-api-access-pqh8z\") pod \"cni-sysctl-allowlist-ds-2b9n2\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:24.114976 master-0 kubenswrapper[7744]: I0220 14:54:24.114343 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:24.316561 master-0 kubenswrapper[7744]: I0220 14:54:24.316440 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:24.316561 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:24.316561 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:24.316561 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:24.316561 master-0 kubenswrapper[7744]: I0220 14:54:24.316512 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:24.318041 master-0 kubenswrapper[7744]: I0220 14:54:24.318014 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" event={"ID":"50a42db6-cb4a-4290-bff8-44fdb2801256","Type":"ContainerStarted","Data":"0b37d26cfb7c6a3acfa7c13c6fa717c94bd0470cdde0665adfa25b31b9af9f84"} Feb 20 14:54:24.408823 master-0 kubenswrapper[7744]: I0220 14:54:24.408771 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 20 14:54:24.416277 master-0 kubenswrapper[7744]: W0220 14:54:24.416246 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8b364609_1de8_433c_9e6f_1efabc3e5108.slice/crio-d1a2f15f275677ac6a3dfe8413f6825d7a461776c50e30b3dddbc6cd284746da WatchSource:0}: Error finding container d1a2f15f275677ac6a3dfe8413f6825d7a461776c50e30b3dddbc6cd284746da: Status 404 returned error can't find the container with id 
d1a2f15f275677ac6a3dfe8413f6825d7a461776c50e30b3dddbc6cd284746da Feb 20 14:54:24.887214 master-0 kubenswrapper[7744]: I0220 14:54:24.880342 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 20 14:54:24.887214 master-0 kubenswrapper[7744]: I0220 14:54:24.881548 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:54:24.887214 master-0 kubenswrapper[7744]: I0220 14:54:24.883966 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-fjspz" Feb 20 14:54:24.887214 master-0 kubenswrapper[7744]: I0220 14:54:24.884119 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 20 14:54:24.901145 master-0 kubenswrapper[7744]: I0220 14:54:24.901029 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 20 14:54:25.072874 master-0 kubenswrapper[7744]: I0220 14:54:25.072750 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c8741d7-c96b-41cc-80cb-81683bb68480-var-lock\") pod \"installer-4-master-0\" (UID: \"5c8741d7-c96b-41cc-80cb-81683bb68480\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:54:25.073132 master-0 kubenswrapper[7744]: I0220 14:54:25.072950 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c8741d7-c96b-41cc-80cb-81683bb68480-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5c8741d7-c96b-41cc-80cb-81683bb68480\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:54:25.073181 master-0 kubenswrapper[7744]: I0220 14:54:25.073126 7744 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c8741d7-c96b-41cc-80cb-81683bb68480-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5c8741d7-c96b-41cc-80cb-81683bb68480\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:54:25.174896 master-0 kubenswrapper[7744]: I0220 14:54:25.174707 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c8741d7-c96b-41cc-80cb-81683bb68480-var-lock\") pod \"installer-4-master-0\" (UID: \"5c8741d7-c96b-41cc-80cb-81683bb68480\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:54:25.175356 master-0 kubenswrapper[7744]: I0220 14:54:25.175241 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c8741d7-c96b-41cc-80cb-81683bb68480-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5c8741d7-c96b-41cc-80cb-81683bb68480\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:54:25.175539 master-0 kubenswrapper[7744]: I0220 14:54:25.175482 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c8741d7-c96b-41cc-80cb-81683bb68480-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5c8741d7-c96b-41cc-80cb-81683bb68480\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:54:25.175682 master-0 kubenswrapper[7744]: I0220 14:54:25.175629 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c8741d7-c96b-41cc-80cb-81683bb68480-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5c8741d7-c96b-41cc-80cb-81683bb68480\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:54:25.175762 master-0 kubenswrapper[7744]: I0220 14:54:25.175701 7744 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c8741d7-c96b-41cc-80cb-81683bb68480-var-lock\") pod \"installer-4-master-0\" (UID: \"5c8741d7-c96b-41cc-80cb-81683bb68480\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:54:25.203752 master-0 kubenswrapper[7744]: I0220 14:54:25.203675 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c8741d7-c96b-41cc-80cb-81683bb68480-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5c8741d7-c96b-41cc-80cb-81683bb68480\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:54:25.224600 master-0 kubenswrapper[7744]: I0220 14:54:25.224520 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:54:25.317492 master-0 kubenswrapper[7744]: I0220 14:54:25.317391 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:25.317492 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:25.317492 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:25.317492 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:25.317492 master-0 kubenswrapper[7744]: I0220 14:54:25.317482 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:25.330315 master-0 kubenswrapper[7744]: I0220 14:54:25.330211 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" 
event={"ID":"8b364609-1de8-433c-9e6f-1efabc3e5108","Type":"ContainerStarted","Data":"a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2"} Feb 20 14:54:25.330315 master-0 kubenswrapper[7744]: I0220 14:54:25.330288 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"8b364609-1de8-433c-9e6f-1efabc3e5108","Type":"ContainerStarted","Data":"d1a2f15f275677ac6a3dfe8413f6825d7a461776c50e30b3dddbc6cd284746da"} Feb 20 14:54:25.335223 master-0 kubenswrapper[7744]: I0220 14:54:25.335120 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" event={"ID":"50a42db6-cb4a-4290-bff8-44fdb2801256","Type":"ContainerStarted","Data":"e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86"} Feb 20 14:54:25.394813 master-0 kubenswrapper[7744]: I0220 14:54:25.394679 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.394642209 podStartE2EDuration="2.394642209s" podCreationTimestamp="2026-02-20 14:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:54:25.355165615 +0000 UTC m=+464.557365545" watchObservedRunningTime="2026-02-20 14:54:25.394642209 +0000 UTC m=+464.596842169" Feb 20 14:54:25.397089 master-0 kubenswrapper[7744]: I0220 14:54:25.397010 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" podStartSLOduration=2.396909265 podStartE2EDuration="2.396909265s" podCreationTimestamp="2026-02-20 14:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:54:25.385697298 +0000 UTC m=+464.587897228" watchObservedRunningTime="2026-02-20 14:54:25.396909265 +0000 UTC m=+464.599109225" Feb 
20 14:54:25.750440 master-0 kubenswrapper[7744]: I0220 14:54:25.750345 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 20 14:54:25.764251 master-0 kubenswrapper[7744]: W0220 14:54:25.764087 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5c8741d7_c96b_41cc_80cb_81683bb68480.slice/crio-eef1aa66846c305d37d9496640c02851ab1df6ad78f667da48c6c7b15695dd4f WatchSource:0}: Error finding container eef1aa66846c305d37d9496640c02851ab1df6ad78f667da48c6c7b15695dd4f: Status 404 returned error can't find the container with id eef1aa66846c305d37d9496640c02851ab1df6ad78f667da48c6c7b15695dd4f Feb 20 14:54:26.317160 master-0 kubenswrapper[7744]: I0220 14:54:26.316995 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:26.317160 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:26.317160 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:26.317160 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:26.317160 master-0 kubenswrapper[7744]: I0220 14:54:26.317089 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:26.347393 master-0 kubenswrapper[7744]: I0220 14:54:26.347316 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5c8741d7-c96b-41cc-80cb-81683bb68480","Type":"ContainerStarted","Data":"eef1aa66846c305d37d9496640c02851ab1df6ad78f667da48c6c7b15695dd4f"} Feb 20 14:54:26.348021 master-0 kubenswrapper[7744]: I0220 14:54:26.347986 7744 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:26.383964 master-0 kubenswrapper[7744]: I0220 14:54:26.383867 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:26.816218 master-0 kubenswrapper[7744]: I0220 14:54:26.816132 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2b9n2"] Feb 20 14:54:27.317515 master-0 kubenswrapper[7744]: I0220 14:54:27.317445 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:27.317515 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:27.317515 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:27.317515 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:27.317901 master-0 kubenswrapper[7744]: I0220 14:54:27.317534 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:27.358159 master-0 kubenswrapper[7744]: I0220 14:54:27.358083 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5c8741d7-c96b-41cc-80cb-81683bb68480","Type":"ContainerStarted","Data":"cff869feeda154776fdb80bde49136ec0b5b04dcf06768e009678b70576a1603"} Feb 20 14:54:27.375617 master-0 kubenswrapper[7744]: I0220 14:54:27.375511 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=3.375487668 podStartE2EDuration="3.375487668s" 
podCreationTimestamp="2026-02-20 14:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:54:27.374844643 +0000 UTC m=+466.577044603" watchObservedRunningTime="2026-02-20 14:54:27.375487668 +0000 UTC m=+466.577687598" Feb 20 14:54:28.317277 master-0 kubenswrapper[7744]: I0220 14:54:28.317195 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:28.317277 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:28.317277 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:28.317277 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:28.317773 master-0 kubenswrapper[7744]: I0220 14:54:28.317294 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:28.369944 master-0 kubenswrapper[7744]: I0220 14:54:28.369810 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" podUID="50a42db6-cb4a-4290-bff8-44fdb2801256" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" gracePeriod=30 Feb 20 14:54:29.317444 master-0 kubenswrapper[7744]: I0220 14:54:29.317373 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:29.317444 master-0 kubenswrapper[7744]: 
[-]has-synced failed: reason withheld Feb 20 14:54:29.317444 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:29.317444 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:29.318059 master-0 kubenswrapper[7744]: I0220 14:54:29.318006 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:30.325360 master-0 kubenswrapper[7744]: I0220 14:54:30.325268 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:30.325360 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:30.325360 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:30.325360 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:30.326460 master-0 kubenswrapper[7744]: I0220 14:54:30.325403 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:31.316730 master-0 kubenswrapper[7744]: I0220 14:54:31.316608 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:31.316730 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:31.316730 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:31.316730 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:31.316730 master-0 
kubenswrapper[7744]: I0220 14:54:31.316722 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:32.316911 master-0 kubenswrapper[7744]: I0220 14:54:32.316811 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:32.316911 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:32.316911 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:32.316911 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:32.318008 master-0 kubenswrapper[7744]: I0220 14:54:32.316971 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:54:33.316630 master-0 kubenswrapper[7744]: I0220 14:54:33.316525 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:54:33.316630 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:54:33.316630 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:54:33.316630 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:54:33.317613 master-0 kubenswrapper[7744]: I0220 14:54:33.316636 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:33.391215 master-0 kubenswrapper[7744]: I0220 14:54:33.391131 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb"]
Feb 20 14:54:33.392752 master-0 kubenswrapper[7744]: I0220 14:54:33.392709 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb"
Feb 20 14:54:33.396248 master-0 kubenswrapper[7744]: I0220 14:54:33.396167 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-qb2q7"
Feb 20 14:54:33.419916 master-0 kubenswrapper[7744]: I0220 14:54:33.418884 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb"]
Feb 20 14:54:33.496944 master-0 kubenswrapper[7744]: I0220 14:54:33.496707 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/49defec6-a225-47ab-99ff-7a846f23eb00-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-7j5jb\" (UID: \"49defec6-a225-47ab-99ff-7a846f23eb00\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb"
Feb 20 14:54:33.496944 master-0 kubenswrapper[7744]: I0220 14:54:33.496849 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k94cb\" (UniqueName: \"kubernetes.io/projected/49defec6-a225-47ab-99ff-7a846f23eb00-kube-api-access-k94cb\") pod \"multus-admission-controller-5f54bf67d4-7j5jb\" (UID: \"49defec6-a225-47ab-99ff-7a846f23eb00\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb"
Feb 20 14:54:33.598630 master-0 kubenswrapper[7744]: I0220 14:54:33.598519 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/49defec6-a225-47ab-99ff-7a846f23eb00-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-7j5jb\" (UID: \"49defec6-a225-47ab-99ff-7a846f23eb00\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb"
Feb 20 14:54:33.598630 master-0 kubenswrapper[7744]: I0220 14:54:33.598598 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k94cb\" (UniqueName: \"kubernetes.io/projected/49defec6-a225-47ab-99ff-7a846f23eb00-kube-api-access-k94cb\") pod \"multus-admission-controller-5f54bf67d4-7j5jb\" (UID: \"49defec6-a225-47ab-99ff-7a846f23eb00\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb"
Feb 20 14:54:33.601471 master-0 kubenswrapper[7744]: I0220 14:54:33.601428 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/49defec6-a225-47ab-99ff-7a846f23eb00-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-7j5jb\" (UID: \"49defec6-a225-47ab-99ff-7a846f23eb00\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb"
Feb 20 14:54:33.612583 master-0 kubenswrapper[7744]: I0220 14:54:33.612541 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k94cb\" (UniqueName: \"kubernetes.io/projected/49defec6-a225-47ab-99ff-7a846f23eb00-kube-api-access-k94cb\") pod \"multus-admission-controller-5f54bf67d4-7j5jb\" (UID: \"49defec6-a225-47ab-99ff-7a846f23eb00\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb"
Feb 20 14:54:33.724046 master-0 kubenswrapper[7744]: I0220 14:54:33.723968 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb"
Feb 20 14:54:33.988172 master-0 kubenswrapper[7744]: I0220 14:54:33.988103 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb"]
Feb 20 14:54:33.993321 master-0 kubenswrapper[7744]: W0220 14:54:33.993230 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49defec6_a225_47ab_99ff_7a846f23eb00.slice/crio-ec64844e3e46d42ec4c570bb811039de046f41f872bc256c338ea6312e07ba0d WatchSource:0}: Error finding container ec64844e3e46d42ec4c570bb811039de046f41f872bc256c338ea6312e07ba0d: Status 404 returned error can't find the container with id ec64844e3e46d42ec4c570bb811039de046f41f872bc256c338ea6312e07ba0d
Feb 20 14:54:34.120825 master-0 kubenswrapper[7744]: E0220 14:54:34.120700 7744 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 14:54:34.123467 master-0 kubenswrapper[7744]: E0220 14:54:34.123420 7744 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 14:54:34.125665 master-0 kubenswrapper[7744]: E0220 14:54:34.125568 7744 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 20 14:54:34.125843 master-0 kubenswrapper[7744]: E0220 14:54:34.125667 7744 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" podUID="50a42db6-cb4a-4290-bff8-44fdb2801256" containerName="kube-multus-additional-cni-plugins"
Feb 20 14:54:34.316751 master-0 kubenswrapper[7744]: I0220 14:54:34.316684 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:34.316751 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:34.316751 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:34.316751 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:34.317755 master-0 kubenswrapper[7744]: I0220 14:54:34.316771 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:34.422348 master-0 kubenswrapper[7744]: I0220 14:54:34.422139 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb" event={"ID":"49defec6-a225-47ab-99ff-7a846f23eb00","Type":"ContainerStarted","Data":"01fa54e3fedf15625b874769be8058628ecbf8d9c1e1408b5cd8a41440ab8cfc"}
Feb 20 14:54:34.422348 master-0 kubenswrapper[7744]: I0220 14:54:34.422237 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb" event={"ID":"49defec6-a225-47ab-99ff-7a846f23eb00","Type":"ContainerStarted","Data":"ec64844e3e46d42ec4c570bb811039de046f41f872bc256c338ea6312e07ba0d"}
Feb 20 14:54:35.317370 master-0 kubenswrapper[7744]: I0220 14:54:35.317266 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:35.317370 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:35.317370 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:35.317370 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:35.318301 master-0 kubenswrapper[7744]: I0220 14:54:35.317393 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:35.432498 master-0 kubenswrapper[7744]: I0220 14:54:35.432429 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb" event={"ID":"49defec6-a225-47ab-99ff-7a846f23eb00","Type":"ContainerStarted","Data":"dba9c42dcf7fcdb0a76b8629c779f71b81357a7ea8751c9b83573eb252b5a3d1"}
Feb 20 14:54:35.469683 master-0 kubenswrapper[7744]: I0220 14:54:35.469559 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb" podStartSLOduration=2.469526205 podStartE2EDuration="2.469526205s" podCreationTimestamp="2026-02-20 14:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:54:35.461391465 +0000 UTC m=+474.663591445" watchObservedRunningTime="2026-02-20 14:54:35.469526205 +0000 UTC m=+474.671726165"
Feb 20 14:54:35.535119 master-0 kubenswrapper[7744]: I0220 14:54:35.535002 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"]
Feb 20 14:54:35.536015 master-0 kubenswrapper[7744]: I0220 14:54:35.535961 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" podUID="a1fb2774-6dd7-4429-9df3-4ddfcdaac939" containerName="kube-rbac-proxy" containerID="cri-o://8e0b13405e2daee0a5927d9bce9075eb42d4f3573a0155428e0f7790d97b9deb" gracePeriod=30
Feb 20 14:54:35.538302 master-0 kubenswrapper[7744]: I0220 14:54:35.535889 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" podUID="a1fb2774-6dd7-4429-9df3-4ddfcdaac939" containerName="multus-admission-controller" containerID="cri-o://4ddd9de39788fb0527d2c46757c0e3580e52a5c156ee1c89170d5a4e7024de06" gracePeriod=30
Feb 20 14:54:35.660719 master-0 kubenswrapper[7744]: I0220 14:54:35.659481 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"]
Feb 20 14:54:35.661792 master-0 kubenswrapper[7744]: I0220 14:54:35.661743 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 20 14:54:35.671126 master-0 kubenswrapper[7744]: I0220 14:54:35.667391 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Feb 20 14:54:35.671126 master-0 kubenswrapper[7744]: I0220 14:54:35.667732 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-94bs8"
Feb 20 14:54:35.671814 master-0 kubenswrapper[7744]: I0220 14:54:35.671757 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Feb 20 14:54:35.838150 master-0 kubenswrapper[7744]: I0220 14:54:35.838052 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6285323-3e75-4d44-ad05-98890c097dd2-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b6285323-3e75-4d44-ad05-98890c097dd2\") " pod="openshift-etcd/installer-2-master-0"
Feb 20 14:54:35.838556 master-0 kubenswrapper[7744]: I0220 14:54:35.838493 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6285323-3e75-4d44-ad05-98890c097dd2-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b6285323-3e75-4d44-ad05-98890c097dd2\") " pod="openshift-etcd/installer-2-master-0"
Feb 20 14:54:35.838675 master-0 kubenswrapper[7744]: I0220 14:54:35.838633 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b6285323-3e75-4d44-ad05-98890c097dd2-var-lock\") pod \"installer-2-master-0\" (UID: \"b6285323-3e75-4d44-ad05-98890c097dd2\") " pod="openshift-etcd/installer-2-master-0"
Feb 20 14:54:35.940779 master-0 kubenswrapper[7744]: I0220 14:54:35.940438 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b6285323-3e75-4d44-ad05-98890c097dd2-var-lock\") pod \"installer-2-master-0\" (UID: \"b6285323-3e75-4d44-ad05-98890c097dd2\") " pod="openshift-etcd/installer-2-master-0"
Feb 20 14:54:35.940779 master-0 kubenswrapper[7744]: I0220 14:54:35.940607 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6285323-3e75-4d44-ad05-98890c097dd2-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b6285323-3e75-4d44-ad05-98890c097dd2\") " pod="openshift-etcd/installer-2-master-0"
Feb 20 14:54:35.941223 master-0 kubenswrapper[7744]: I0220 14:54:35.940849 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6285323-3e75-4d44-ad05-98890c097dd2-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b6285323-3e75-4d44-ad05-98890c097dd2\") " pod="openshift-etcd/installer-2-master-0"
Feb 20 14:54:35.941223 master-0 kubenswrapper[7744]: I0220 14:54:35.940802 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b6285323-3e75-4d44-ad05-98890c097dd2-var-lock\") pod \"installer-2-master-0\" (UID: \"b6285323-3e75-4d44-ad05-98890c097dd2\") " pod="openshift-etcd/installer-2-master-0"
Feb 20 14:54:35.941223 master-0 kubenswrapper[7744]: I0220 14:54:35.941040 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6285323-3e75-4d44-ad05-98890c097dd2-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b6285323-3e75-4d44-ad05-98890c097dd2\") " pod="openshift-etcd/installer-2-master-0"
Feb 20 14:54:35.972709 master-0 kubenswrapper[7744]: I0220 14:54:35.972624 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6285323-3e75-4d44-ad05-98890c097dd2-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b6285323-3e75-4d44-ad05-98890c097dd2\") " pod="openshift-etcd/installer-2-master-0"
Feb 20 14:54:36.005328 master-0 kubenswrapper[7744]: I0220 14:54:36.005278 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 20 14:54:36.317553 master-0 kubenswrapper[7744]: I0220 14:54:36.317482 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:36.317553 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:36.317553 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:36.317553 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:36.318192 master-0 kubenswrapper[7744]: I0220 14:54:36.317576 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:36.444019 master-0 kubenswrapper[7744]: I0220 14:54:36.443858 7744 generic.go:334] "Generic (PLEG): container finished" podID="a1fb2774-6dd7-4429-9df3-4ddfcdaac939" containerID="8e0b13405e2daee0a5927d9bce9075eb42d4f3573a0155428e0f7790d97b9deb" exitCode=0
Feb 20 14:54:36.444019 master-0 kubenswrapper[7744]: I0220 14:54:36.443959 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" event={"ID":"a1fb2774-6dd7-4429-9df3-4ddfcdaac939","Type":"ContainerDied","Data":"8e0b13405e2daee0a5927d9bce9075eb42d4f3573a0155428e0f7790d97b9deb"}
Feb 20 14:54:36.560781 master-0 kubenswrapper[7744]: I0220 14:54:36.560737 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Feb 20 14:54:37.315191 master-0 kubenswrapper[7744]: I0220 14:54:37.315157 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:37.315191 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:37.315191 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:37.315191 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:37.315516 master-0 kubenswrapper[7744]: I0220 14:54:37.315491 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:37.457431 master-0 kubenswrapper[7744]: I0220 14:54:37.457254 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b6285323-3e75-4d44-ad05-98890c097dd2","Type":"ContainerStarted","Data":"e0e54afa304c07256ca81f12b5ac712d5ac8488390931a330fe4a44a3c9b790d"}
Feb 20 14:54:37.457431 master-0 kubenswrapper[7744]: I0220 14:54:37.457344 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b6285323-3e75-4d44-ad05-98890c097dd2","Type":"ContainerStarted","Data":"a18ba6fef141df70b03fa378f8e3dafed41e947f342e811cb930b80a2236b753"}
Feb 20 14:54:37.492292 master-0 kubenswrapper[7744]: I0220 14:54:37.491916 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.491897409 podStartE2EDuration="2.491897409s" podCreationTimestamp="2026-02-20 14:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:54:37.485781138 +0000 UTC m=+476.687981068" watchObservedRunningTime="2026-02-20 14:54:37.491897409 +0000 UTC m=+476.694097339"
Feb 20 14:54:38.317320 master-0 kubenswrapper[7744]: I0220 14:54:38.317219 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:38.317320 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:38.317320 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:38.317320 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:38.317730 master-0 kubenswrapper[7744]: I0220 14:54:38.317313 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:38.682221 master-0 kubenswrapper[7744]: I0220 14:54:38.682025 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 20 14:54:38.682987 master-0 kubenswrapper[7744]: I0220 14:54:38.682434 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="8b364609-1de8-433c-9e6f-1efabc3e5108" containerName="installer" containerID="cri-o://a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2" gracePeriod=30
Feb 20 14:54:39.202148 master-0 kubenswrapper[7744]: I0220 14:54:39.201966 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_8b364609-1de8-433c-9e6f-1efabc3e5108/installer/0.log"
Feb 20 14:54:39.202148 master-0 kubenswrapper[7744]: I0220 14:54:39.202044 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 14:54:39.298111 master-0 kubenswrapper[7744]: I0220 14:54:39.297989 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b364609-1de8-433c-9e6f-1efabc3e5108-kube-api-access\") pod \"8b364609-1de8-433c-9e6f-1efabc3e5108\" (UID: \"8b364609-1de8-433c-9e6f-1efabc3e5108\") "
Feb 20 14:54:39.298442 master-0 kubenswrapper[7744]: I0220 14:54:39.298171 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b364609-1de8-433c-9e6f-1efabc3e5108-kubelet-dir\") pod \"8b364609-1de8-433c-9e6f-1efabc3e5108\" (UID: \"8b364609-1de8-433c-9e6f-1efabc3e5108\") "
Feb 20 14:54:39.298442 master-0 kubenswrapper[7744]: I0220 14:54:39.298267 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b364609-1de8-433c-9e6f-1efabc3e5108-var-lock\") pod \"8b364609-1de8-433c-9e6f-1efabc3e5108\" (UID: \"8b364609-1de8-433c-9e6f-1efabc3e5108\") "
Feb 20 14:54:39.298602 master-0 kubenswrapper[7744]: I0220 14:54:39.298444 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b364609-1de8-433c-9e6f-1efabc3e5108-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8b364609-1de8-433c-9e6f-1efabc3e5108" (UID: "8b364609-1de8-433c-9e6f-1efabc3e5108"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:54:39.298602 master-0 kubenswrapper[7744]: I0220 14:54:39.298566 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b364609-1de8-433c-9e6f-1efabc3e5108-var-lock" (OuterVolumeSpecName: "var-lock") pod "8b364609-1de8-433c-9e6f-1efabc3e5108" (UID: "8b364609-1de8-433c-9e6f-1efabc3e5108"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:54:39.298830 master-0 kubenswrapper[7744]: I0220 14:54:39.298779 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b364609-1de8-433c-9e6f-1efabc3e5108-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 14:54:39.298830 master-0 kubenswrapper[7744]: I0220 14:54:39.298813 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b364609-1de8-433c-9e6f-1efabc3e5108-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 14:54:39.302438 master-0 kubenswrapper[7744]: I0220 14:54:39.302353 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b364609-1de8-433c-9e6f-1efabc3e5108-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8b364609-1de8-433c-9e6f-1efabc3e5108" (UID: "8b364609-1de8-433c-9e6f-1efabc3e5108"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:54:39.316275 master-0 kubenswrapper[7744]: I0220 14:54:39.316203 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:39.316275 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:39.316275 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:39.316275 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:39.316580 master-0 kubenswrapper[7744]: I0220 14:54:39.316274 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:39.401069 master-0 kubenswrapper[7744]: I0220 14:54:39.400986 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b364609-1de8-433c-9e6f-1efabc3e5108-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 14:54:39.478392 master-0 kubenswrapper[7744]: I0220 14:54:39.478201 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_8b364609-1de8-433c-9e6f-1efabc3e5108/installer/0.log"
Feb 20 14:54:39.478392 master-0 kubenswrapper[7744]: I0220 14:54:39.478312 7744 generic.go:334] "Generic (PLEG): container finished" podID="8b364609-1de8-433c-9e6f-1efabc3e5108" containerID="a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2" exitCode=1
Feb 20 14:54:39.478876 master-0 kubenswrapper[7744]: I0220 14:54:39.478393 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"8b364609-1de8-433c-9e6f-1efabc3e5108","Type":"ContainerDied","Data":"a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2"}
Feb 20 14:54:39.478876 master-0 kubenswrapper[7744]: I0220 14:54:39.478440 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Feb 20 14:54:39.478876 master-0 kubenswrapper[7744]: I0220 14:54:39.478483 7744 scope.go:117] "RemoveContainer" containerID="a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2"
Feb 20 14:54:39.478876 master-0 kubenswrapper[7744]: I0220 14:54:39.478464 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"8b364609-1de8-433c-9e6f-1efabc3e5108","Type":"ContainerDied","Data":"d1a2f15f275677ac6a3dfe8413f6825d7a461776c50e30b3dddbc6cd284746da"}
Feb 20 14:54:39.505369 master-0 kubenswrapper[7744]: I0220 14:54:39.505278 7744 scope.go:117] "RemoveContainer" containerID="a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2"
Feb 20 14:54:39.506591 master-0 kubenswrapper[7744]: E0220 14:54:39.506474 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2\": container with ID starting with a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2 not found: ID does not exist" containerID="a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2"
Feb 20 14:54:39.506739 master-0 kubenswrapper[7744]: I0220 14:54:39.506569 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2"} err="failed to get container status \"a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2\": rpc error: code = NotFound desc = could not find container \"a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2\": container with ID starting with a0abd59224ad60643808d9b0fe94522edc62b60705e1b94c1b053cb0899402f2 not found: ID does not exist"
Feb 20 14:54:39.547821 master-0 kubenswrapper[7744]: I0220 14:54:39.547383 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 20 14:54:39.564245 master-0 kubenswrapper[7744]: I0220 14:54:39.564195 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Feb 20 14:54:39.570181 master-0 kubenswrapper[7744]: I0220 14:54:39.570102 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 20 14:54:39.570521 master-0 kubenswrapper[7744]: E0220 14:54:39.570473 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b364609-1de8-433c-9e6f-1efabc3e5108" containerName="installer"
Feb 20 14:54:39.570521 master-0 kubenswrapper[7744]: I0220 14:54:39.570505 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b364609-1de8-433c-9e6f-1efabc3e5108" containerName="installer"
Feb 20 14:54:39.570775 master-0 kubenswrapper[7744]: I0220 14:54:39.570735 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b364609-1de8-433c-9e6f-1efabc3e5108" containerName="installer"
Feb 20 14:54:39.571465 master-0 kubenswrapper[7744]: I0220 14:54:39.571383 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:54:39.575356 master-0 kubenswrapper[7744]: I0220 14:54:39.573694 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 20 14:54:39.575356 master-0 kubenswrapper[7744]: I0220 14:54:39.573694 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-j2nvb"
Feb 20 14:54:39.582864 master-0 kubenswrapper[7744]: I0220 14:54:39.582784 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 20 14:54:39.715492 master-0 kubenswrapper[7744]: I0220 14:54:39.715353 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3753e8e6-e86c-4841-bc82-ce5321b5583f-var-lock\") pod \"installer-2-master-0\" (UID: \"3753e8e6-e86c-4841-bc82-ce5321b5583f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:54:39.716357 master-0 kubenswrapper[7744]: I0220 14:54:39.715600 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3753e8e6-e86c-4841-bc82-ce5321b5583f-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3753e8e6-e86c-4841-bc82-ce5321b5583f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:54:39.716357 master-0 kubenswrapper[7744]: I0220 14:54:39.715638 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3753e8e6-e86c-4841-bc82-ce5321b5583f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3753e8e6-e86c-4841-bc82-ce5321b5583f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:54:39.817283 master-0 kubenswrapper[7744]: I0220 14:54:39.817174 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3753e8e6-e86c-4841-bc82-ce5321b5583f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3753e8e6-e86c-4841-bc82-ce5321b5583f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:54:39.817283 master-0 kubenswrapper[7744]: I0220 14:54:39.817255 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3753e8e6-e86c-4841-bc82-ce5321b5583f-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3753e8e6-e86c-4841-bc82-ce5321b5583f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:54:39.817640 master-0 kubenswrapper[7744]: I0220 14:54:39.817410 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3753e8e6-e86c-4841-bc82-ce5321b5583f-var-lock\") pod \"installer-2-master-0\" (UID: \"3753e8e6-e86c-4841-bc82-ce5321b5583f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:54:39.817640 master-0 kubenswrapper[7744]: I0220 14:54:39.817438 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3753e8e6-e86c-4841-bc82-ce5321b5583f-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3753e8e6-e86c-4841-bc82-ce5321b5583f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:54:39.817640 master-0 kubenswrapper[7744]: I0220 14:54:39.817596 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3753e8e6-e86c-4841-bc82-ce5321b5583f-var-lock\") pod \"installer-2-master-0\" (UID: \"3753e8e6-e86c-4841-bc82-ce5321b5583f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:54:39.846960 master-0 kubenswrapper[7744]: I0220 14:54:39.846868 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3753e8e6-e86c-4841-bc82-ce5321b5583f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3753e8e6-e86c-4841-bc82-ce5321b5583f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:54:39.947127 master-0 kubenswrapper[7744]: I0220 14:54:39.947001 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:54:40.317436 master-0 kubenswrapper[7744]: I0220 14:54:40.317334 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:54:40.317436 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:54:40.317436 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:54:40.317436 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:54:40.317991 master-0 kubenswrapper[7744]: I0220 14:54:40.317438 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:54:40.317991 master-0 kubenswrapper[7744]: I0220 14:54:40.317529 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt"
Feb 20 14:54:40.319402 master-0 kubenswrapper[7744]: I0220 14:54:40.319336 7744 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"b8c9ab75c341608bbd631623c30a262c8f71065b35633a99f02888aa224f7c9c"} pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" containerMessage="Container router failed startup probe, will be restarted"
Feb 20 14:54:40.319521 master-0 kubenswrapper[7744]: I0220 14:54:40.319449 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" containerID="cri-o://b8c9ab75c341608bbd631623c30a262c8f71065b35633a99f02888aa224f7c9c" gracePeriod=3600
Feb 20 14:54:40.457798 master-0 kubenswrapper[7744]: I0220 14:54:40.457039 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 20 14:54:40.469618 master-0 kubenswrapper[7744]: W0220 14:54:40.468688 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3753e8e6_e86c_4841_bc82_ce5321b5583f.slice/crio-fc878293ad9742e0e6e203c029346a7ba8ae6bc45e7cd5df0e7da3c99e545909 WatchSource:0}: Error finding container fc878293ad9742e0e6e203c029346a7ba8ae6bc45e7cd5df0e7da3c99e545909: Status 404 returned error can't find the container with id fc878293ad9742e0e6e203c029346a7ba8ae6bc45e7cd5df0e7da3c99e545909
Feb 20 14:54:40.498686 master-0 kubenswrapper[7744]: I0220 14:54:40.498623 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"3753e8e6-e86c-4841-bc82-ce5321b5583f","Type":"ContainerStarted","Data":"fc878293ad9742e0e6e203c029346a7ba8ae6bc45e7cd5df0e7da3c99e545909"}
Feb 20 14:54:41.048503 master-0 kubenswrapper[7744]: I0220 14:54:41.048460 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b364609-1de8-433c-9e6f-1efabc3e5108" path="/var/lib/kubelet/pods/8b364609-1de8-433c-9e6f-1efabc3e5108/volumes"
Feb 20 14:54:41.511069 master-0 kubenswrapper[7744]: I0220 14:54:41.510969 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"3753e8e6-e86c-4841-bc82-ce5321b5583f","Type":"ContainerStarted","Data":"e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24"}
Feb 20 14:54:41.544429 master-0 kubenswrapper[7744]: I0220 14:54:41.544296 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=2.544263264 podStartE2EDuration="2.544263264s" podCreationTimestamp="2026-02-20 14:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:54:41.537872146 +0000 UTC m=+480.740072126" watchObservedRunningTime="2026-02-20 14:54:41.544263264 +0000 UTC m=+480.746463224"
Feb 20 14:54:42.897183 master-0 kubenswrapper[7744]: I0220 14:54:42.892970 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 20 14:54:42.901259 master-0 kubenswrapper[7744]: I0220 14:54:42.901177 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 20 14:54:42.920332 master-0 kubenswrapper[7744]: I0220 14:54:42.919804 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Feb 20 14:54:42.971880 master-0 kubenswrapper[7744]: I0220 14:54:42.971832 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 20 14:54:42.972102 master-0 kubenswrapper[7744]: I0220 14:54:42.971912 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-kube-api-access\") pod \"installer-3-master-0\" (UID: \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 20 14:54:42.972102 master-0 kubenswrapper[7744]: I0220 14:54:42.971965 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-var-lock\") pod \"installer-3-master-0\" (UID: \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 20 14:54:43.073259 master-0 kubenswrapper[7744]: I0220 14:54:43.073139 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 20 14:54:43.073515 master-0 kubenswrapper[7744]: I0220 14:54:43.073298 7744
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 14:54:43.073515 master-0 kubenswrapper[7744]: I0220 14:54:43.073354 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-kube-api-access\") pod \"installer-3-master-0\" (UID: \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 14:54:43.073515 master-0 kubenswrapper[7744]: I0220 14:54:43.073411 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-var-lock\") pod \"installer-3-master-0\" (UID: \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 14:54:43.073718 master-0 kubenswrapper[7744]: I0220 14:54:43.073587 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-var-lock\") pod \"installer-3-master-0\" (UID: \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 14:54:43.088541 master-0 kubenswrapper[7744]: I0220 14:54:43.088486 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-kube-api-access\") pod \"installer-3-master-0\" (UID: \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 14:54:43.230823 master-0 kubenswrapper[7744]: I0220 14:54:43.230652 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 14:54:43.783980 master-0 kubenswrapper[7744]: I0220 14:54:43.783011 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 20 14:54:43.791281 master-0 kubenswrapper[7744]: W0220 14:54:43.791032 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3ef51d3b_cd8b_4f34_961e_8daebbed3ca6.slice/crio-00c49d62b94564e456b20bb8a4dbb2c93a1fe2806ab8327bbf14d442fc57441b WatchSource:0}: Error finding container 00c49d62b94564e456b20bb8a4dbb2c93a1fe2806ab8327bbf14d442fc57441b: Status 404 returned error can't find the container with id 00c49d62b94564e456b20bb8a4dbb2c93a1fe2806ab8327bbf14d442fc57441b Feb 20 14:54:44.119577 master-0 kubenswrapper[7744]: E0220 14:54:44.118886 7744 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 14:54:44.122088 master-0 kubenswrapper[7744]: E0220 14:54:44.122018 7744 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 14:54:44.124728 master-0 kubenswrapper[7744]: E0220 14:54:44.124659 7744 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 14:54:44.124868 
master-0 kubenswrapper[7744]: E0220 14:54:44.124726 7744 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" podUID="50a42db6-cb4a-4290-bff8-44fdb2801256" containerName="kube-multus-additional-cni-plugins" Feb 20 14:54:44.540043 master-0 kubenswrapper[7744]: I0220 14:54:44.539961 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6","Type":"ContainerStarted","Data":"992d06369bcdfc83fe57ae6d1c5dce1f2cfa2163b4588fe5df6d49020418c795"} Feb 20 14:54:44.540524 master-0 kubenswrapper[7744]: I0220 14:54:44.540060 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6","Type":"ContainerStarted","Data":"00c49d62b94564e456b20bb8a4dbb2c93a1fe2806ab8327bbf14d442fc57441b"} Feb 20 14:54:44.570014 master-0 kubenswrapper[7744]: I0220 14:54:44.569784 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.569759416 podStartE2EDuration="2.569759416s" podCreationTimestamp="2026-02-20 14:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:54:44.566139046 +0000 UTC m=+483.768338996" watchObservedRunningTime="2026-02-20 14:54:44.569759416 +0000 UTC m=+483.771959376" Feb 20 14:54:45.551403 master-0 kubenswrapper[7744]: I0220 14:54:45.551308 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_9b5633b4-0459-499d-8d50-ec6f3b35348e/installer/0.log" Feb 20 14:54:45.551403 master-0 kubenswrapper[7744]: I0220 14:54:45.551398 7744 generic.go:334] 
"Generic (PLEG): container finished" podID="9b5633b4-0459-499d-8d50-ec6f3b35348e" containerID="426d2fed4175d2a5893097b98d0c41809a34de5615fe9d14773afa0232ad7999" exitCode=1 Feb 20 14:54:45.552287 master-0 kubenswrapper[7744]: I0220 14:54:45.552125 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"9b5633b4-0459-499d-8d50-ec6f3b35348e","Type":"ContainerDied","Data":"426d2fed4175d2a5893097b98d0c41809a34de5615fe9d14773afa0232ad7999"} Feb 20 14:54:45.656835 master-0 kubenswrapper[7744]: I0220 14:54:45.656761 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_9b5633b4-0459-499d-8d50-ec6f3b35348e/installer/0.log" Feb 20 14:54:45.657124 master-0 kubenswrapper[7744]: I0220 14:54:45.656854 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:45.815877 master-0 kubenswrapper[7744]: I0220 14:54:45.815674 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b5633b4-0459-499d-8d50-ec6f3b35348e-kubelet-dir\") pod \"9b5633b4-0459-499d-8d50-ec6f3b35348e\" (UID: \"9b5633b4-0459-499d-8d50-ec6f3b35348e\") " Feb 20 14:54:45.815877 master-0 kubenswrapper[7744]: I0220 14:54:45.815812 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b5633b4-0459-499d-8d50-ec6f3b35348e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9b5633b4-0459-499d-8d50-ec6f3b35348e" (UID: "9b5633b4-0459-499d-8d50-ec6f3b35348e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:54:45.816300 master-0 kubenswrapper[7744]: I0220 14:54:45.815983 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b5633b4-0459-499d-8d50-ec6f3b35348e-var-lock\") pod \"9b5633b4-0459-499d-8d50-ec6f3b35348e\" (UID: \"9b5633b4-0459-499d-8d50-ec6f3b35348e\") " Feb 20 14:54:45.816300 master-0 kubenswrapper[7744]: I0220 14:54:45.816036 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b5633b4-0459-499d-8d50-ec6f3b35348e-var-lock" (OuterVolumeSpecName: "var-lock") pod "9b5633b4-0459-499d-8d50-ec6f3b35348e" (UID: "9b5633b4-0459-499d-8d50-ec6f3b35348e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:54:45.816300 master-0 kubenswrapper[7744]: I0220 14:54:45.816115 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b5633b4-0459-499d-8d50-ec6f3b35348e-kube-api-access\") pod \"9b5633b4-0459-499d-8d50-ec6f3b35348e\" (UID: \"9b5633b4-0459-499d-8d50-ec6f3b35348e\") " Feb 20 14:54:45.817195 master-0 kubenswrapper[7744]: I0220 14:54:45.817135 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b5633b4-0459-499d-8d50-ec6f3b35348e-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 14:54:45.817195 master-0 kubenswrapper[7744]: I0220 14:54:45.817183 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b5633b4-0459-499d-8d50-ec6f3b35348e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:54:45.824378 master-0 kubenswrapper[7744]: I0220 14:54:45.824288 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b5633b4-0459-499d-8d50-ec6f3b35348e-kube-api-access" 
(OuterVolumeSpecName: "kube-api-access") pod "9b5633b4-0459-499d-8d50-ec6f3b35348e" (UID: "9b5633b4-0459-499d-8d50-ec6f3b35348e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:54:45.919339 master-0 kubenswrapper[7744]: I0220 14:54:45.919247 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b5633b4-0459-499d-8d50-ec6f3b35348e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 14:54:46.566611 master-0 kubenswrapper[7744]: I0220 14:54:46.566528 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_9b5633b4-0459-499d-8d50-ec6f3b35348e/installer/0.log" Feb 20 14:54:46.567517 master-0 kubenswrapper[7744]: I0220 14:54:46.566633 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"9b5633b4-0459-499d-8d50-ec6f3b35348e","Type":"ContainerDied","Data":"d13112859731d18686a15159c8e3489e6bdb627690008a2934c19aa43100750a"} Feb 20 14:54:46.567517 master-0 kubenswrapper[7744]: I0220 14:54:46.566690 7744 scope.go:117] "RemoveContainer" containerID="426d2fed4175d2a5893097b98d0c41809a34de5615fe9d14773afa0232ad7999" Feb 20 14:54:46.567517 master-0 kubenswrapper[7744]: I0220 14:54:46.566731 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 20 14:54:46.630799 master-0 kubenswrapper[7744]: I0220 14:54:46.630691 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 20 14:54:46.636827 master-0 kubenswrapper[7744]: I0220 14:54:46.636758 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 20 14:54:47.050177 master-0 kubenswrapper[7744]: I0220 14:54:47.050089 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b5633b4-0459-499d-8d50-ec6f3b35348e" path="/var/lib/kubelet/pods/9b5633b4-0459-499d-8d50-ec6f3b35348e/volumes" Feb 20 14:54:53.754815 master-0 kubenswrapper[7744]: I0220 14:54:53.754725 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 20 14:54:53.755450 master-0 kubenswrapper[7744]: I0220 14:54:53.755070 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="3753e8e6-e86c-4841-bc82-ce5321b5583f" containerName="installer" containerID="cri-o://e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24" gracePeriod=30 Feb 20 14:54:54.118133 master-0 kubenswrapper[7744]: E0220 14:54:54.117905 7744 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 14:54:54.120615 master-0 kubenswrapper[7744]: E0220 14:54:54.120517 7744 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 14:54:54.123537 master-0 kubenswrapper[7744]: E0220 14:54:54.123463 7744 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 20 14:54:54.123670 master-0 kubenswrapper[7744]: E0220 14:54:54.123536 7744 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" podUID="50a42db6-cb4a-4290-bff8-44fdb2801256" containerName="kube-multus-additional-cni-plugins" Feb 20 14:54:56.152395 master-0 kubenswrapper[7744]: I0220 14:54:56.152303 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 20 14:54:56.153695 master-0 kubenswrapper[7744]: E0220 14:54:56.152677 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b5633b4-0459-499d-8d50-ec6f3b35348e" containerName="installer" Feb 20 14:54:56.153695 master-0 kubenswrapper[7744]: I0220 14:54:56.152696 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b5633b4-0459-499d-8d50-ec6f3b35348e" containerName="installer" Feb 20 14:54:56.153695 master-0 kubenswrapper[7744]: I0220 14:54:56.152846 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b5633b4-0459-499d-8d50-ec6f3b35348e" containerName="installer" Feb 20 14:54:56.153695 master-0 kubenswrapper[7744]: I0220 14:54:56.153550 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:54:56.184420 master-0 kubenswrapper[7744]: I0220 14:54:56.184333 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 20 14:54:56.281178 master-0 kubenswrapper[7744]: I0220 14:54:56.281095 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/380174fb-b30c-4f45-9119-397cdca91756-var-lock\") pod \"installer-3-master-0\" (UID: \"380174fb-b30c-4f45-9119-397cdca91756\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:54:56.281178 master-0 kubenswrapper[7744]: I0220 14:54:56.281176 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/380174fb-b30c-4f45-9119-397cdca91756-kube-api-access\") pod \"installer-3-master-0\" (UID: \"380174fb-b30c-4f45-9119-397cdca91756\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:54:56.281485 master-0 kubenswrapper[7744]: I0220 14:54:56.281285 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/380174fb-b30c-4f45-9119-397cdca91756-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"380174fb-b30c-4f45-9119-397cdca91756\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:54:56.383539 master-0 kubenswrapper[7744]: I0220 14:54:56.383457 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/380174fb-b30c-4f45-9119-397cdca91756-var-lock\") pod \"installer-3-master-0\" (UID: \"380174fb-b30c-4f45-9119-397cdca91756\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:54:56.383807 master-0 kubenswrapper[7744]: 
I0220 14:54:56.383553 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/380174fb-b30c-4f45-9119-397cdca91756-kube-api-access\") pod \"installer-3-master-0\" (UID: \"380174fb-b30c-4f45-9119-397cdca91756\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:54:56.383807 master-0 kubenswrapper[7744]: I0220 14:54:56.383696 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/380174fb-b30c-4f45-9119-397cdca91756-var-lock\") pod \"installer-3-master-0\" (UID: \"380174fb-b30c-4f45-9119-397cdca91756\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:54:56.383897 master-0 kubenswrapper[7744]: I0220 14:54:56.383817 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/380174fb-b30c-4f45-9119-397cdca91756-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"380174fb-b30c-4f45-9119-397cdca91756\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:54:56.384139 master-0 kubenswrapper[7744]: I0220 14:54:56.384087 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/380174fb-b30c-4f45-9119-397cdca91756-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"380174fb-b30c-4f45-9119-397cdca91756\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:54:56.412281 master-0 kubenswrapper[7744]: I0220 14:54:56.412131 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/380174fb-b30c-4f45-9119-397cdca91756-kube-api-access\") pod \"installer-3-master-0\" (UID: \"380174fb-b30c-4f45-9119-397cdca91756\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:54:56.496592 master-0 
kubenswrapper[7744]: I0220 14:54:56.496459 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:54:56.992503 master-0 kubenswrapper[7744]: I0220 14:54:56.992430 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 20 14:54:57.000815 master-0 kubenswrapper[7744]: W0220 14:54:57.000745 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod380174fb_b30c_4f45_9119_397cdca91756.slice/crio-1ee17e2383cea0ad71bf0ed7b91b99cbf73c1a9e377abadef6ff61fb1e1e6676 WatchSource:0}: Error finding container 1ee17e2383cea0ad71bf0ed7b91b99cbf73c1a9e377abadef6ff61fb1e1e6676: Status 404 returned error can't find the container with id 1ee17e2383cea0ad71bf0ed7b91b99cbf73c1a9e377abadef6ff61fb1e1e6676 Feb 20 14:54:57.660773 master-0 kubenswrapper[7744]: I0220 14:54:57.660660 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"380174fb-b30c-4f45-9119-397cdca91756","Type":"ContainerStarted","Data":"28ee0d7fd2e81f54f5dcd52927e71c388f397b4ec8b363fb1c98a6fb82168cd2"} Feb 20 14:54:57.660773 master-0 kubenswrapper[7744]: I0220 14:54:57.660731 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"380174fb-b30c-4f45-9119-397cdca91756","Type":"ContainerStarted","Data":"1ee17e2383cea0ad71bf0ed7b91b99cbf73c1a9e377abadef6ff61fb1e1e6676"} Feb 20 14:54:57.690078 master-0 kubenswrapper[7744]: I0220 14:54:57.689969 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=1.689919341 podStartE2EDuration="1.689919341s" podCreationTimestamp="2026-02-20 14:54:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-20 14:54:57.686837205 +0000 UTC m=+496.889037155" watchObservedRunningTime="2026-02-20 14:54:57.689919341 +0000 UTC m=+496.892119301" Feb 20 14:54:57.738244 master-0 kubenswrapper[7744]: I0220 14:54:57.738155 7744 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 20 14:54:57.738660 master-0 kubenswrapper[7744]: I0220 14:54:57.738543 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" containerID="cri-o://e95606b40a17608c8c7fdabfbaff98a784411ba115dbcdf26ab46d49f3aaafbd" gracePeriod=30 Feb 20 14:54:57.741791 master-0 kubenswrapper[7744]: I0220 14:54:57.741701 7744 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 20 14:54:57.742314 master-0 kubenswrapper[7744]: E0220 14:54:57.742253 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 20 14:54:57.742314 master-0 kubenswrapper[7744]: I0220 14:54:57.742302 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 20 14:54:57.742517 master-0 kubenswrapper[7744]: E0220 14:54:57.742330 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 20 14:54:57.742517 master-0 kubenswrapper[7744]: I0220 14:54:57.742348 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 20 14:54:57.742686 master-0 kubenswrapper[7744]: I0220 14:54:57.742658 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 20 14:54:57.742776 master-0 
kubenswrapper[7744]: I0220 14:54:57.742704 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 20 14:54:57.745572 master-0 kubenswrapper[7744]: I0220 14:54:57.745509 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 14:54:57.815243 master-0 kubenswrapper[7744]: I0220 14:54:57.815102 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 14:54:57.815243 master-0 kubenswrapper[7744]: I0220 14:54:57.815196 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 14:54:57.928967 master-0 kubenswrapper[7744]: I0220 14:54:57.917883 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 14:54:57.928967 master-0 kubenswrapper[7744]: I0220 14:54:57.917996 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: 
\"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 14:54:57.928967 master-0 kubenswrapper[7744]: I0220 14:54:57.918280 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 14:54:57.928967 master-0 kubenswrapper[7744]: I0220 14:54:57.918353 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 14:54:57.959485 master-0 kubenswrapper[7744]: I0220 14:54:57.959398 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 14:54:57.966589 master-0 kubenswrapper[7744]: I0220 14:54:57.966517 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 20 14:54:57.967858 master-0 kubenswrapper[7744]: I0220 14:54:57.967785 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 20 14:54:58.014708 master-0 kubenswrapper[7744]: I0220 14:54:58.014333 7744 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="40211bfa-4ef1-46fb-965b-ce149cab7400" Feb 20 14:54:58.018575 master-0 kubenswrapper[7744]: I0220 14:54:58.018501 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " Feb 20 14:54:58.018799 master-0 kubenswrapper[7744]: I0220 14:54:58.018583 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " Feb 20 14:54:58.019201 master-0 kubenswrapper[7744]: I0220 14:54:58.019058 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets" (OuterVolumeSpecName: "secrets") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:54:58.019201 master-0 kubenswrapper[7744]: I0220 14:54:58.019116 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs" (OuterVolumeSpecName: "logs") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:54:58.122982 master-0 kubenswrapper[7744]: I0220 14:54:58.122903 7744 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 14:54:58.122982 master-0 kubenswrapper[7744]: I0220 14:54:58.122969 7744 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") on node \"master-0\" DevicePath \"\"" Feb 20 14:54:58.593767 master-0 kubenswrapper[7744]: I0220 14:54:58.593720 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-2b9n2_50a42db6-cb4a-4290-bff8-44fdb2801256/kube-multus-additional-cni-plugins/0.log" Feb 20 14:54:58.594059 master-0 kubenswrapper[7744]: I0220 14:54:58.593790 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:58.631634 master-0 kubenswrapper[7744]: I0220 14:54:58.631469 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/50a42db6-cb4a-4290-bff8-44fdb2801256-cni-sysctl-allowlist\") pod \"50a42db6-cb4a-4290-bff8-44fdb2801256\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " Feb 20 14:54:58.631634 master-0 kubenswrapper[7744]: I0220 14:54:58.631548 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqh8z\" (UniqueName: \"kubernetes.io/projected/50a42db6-cb4a-4290-bff8-44fdb2801256-kube-api-access-pqh8z\") pod \"50a42db6-cb4a-4290-bff8-44fdb2801256\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " Feb 20 14:54:58.631634 master-0 kubenswrapper[7744]: I0220 14:54:58.631582 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/50a42db6-cb4a-4290-bff8-44fdb2801256-tuning-conf-dir\") pod \"50a42db6-cb4a-4290-bff8-44fdb2801256\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " Feb 20 14:54:58.631634 master-0 kubenswrapper[7744]: I0220 14:54:58.631625 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/50a42db6-cb4a-4290-bff8-44fdb2801256-ready\") pod \"50a42db6-cb4a-4290-bff8-44fdb2801256\" (UID: \"50a42db6-cb4a-4290-bff8-44fdb2801256\") " Feb 20 14:54:58.632503 master-0 kubenswrapper[7744]: I0220 14:54:58.631771 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50a42db6-cb4a-4290-bff8-44fdb2801256-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "50a42db6-cb4a-4290-bff8-44fdb2801256" (UID: "50a42db6-cb4a-4290-bff8-44fdb2801256"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:54:58.632503 master-0 kubenswrapper[7744]: I0220 14:54:58.632054 7744 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/50a42db6-cb4a-4290-bff8-44fdb2801256-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:54:58.632636 master-0 kubenswrapper[7744]: I0220 14:54:58.632541 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50a42db6-cb4a-4290-bff8-44fdb2801256-ready" (OuterVolumeSpecName: "ready") pod "50a42db6-cb4a-4290-bff8-44fdb2801256" (UID: "50a42db6-cb4a-4290-bff8-44fdb2801256"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 14:54:58.632789 master-0 kubenswrapper[7744]: I0220 14:54:58.632713 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50a42db6-cb4a-4290-bff8-44fdb2801256-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "50a42db6-cb4a-4290-bff8-44fdb2801256" (UID: "50a42db6-cb4a-4290-bff8-44fdb2801256"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 14:54:58.636293 master-0 kubenswrapper[7744]: I0220 14:54:58.636167 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a42db6-cb4a-4290-bff8-44fdb2801256-kube-api-access-pqh8z" (OuterVolumeSpecName: "kube-api-access-pqh8z") pod "50a42db6-cb4a-4290-bff8-44fdb2801256" (UID: "50a42db6-cb4a-4290-bff8-44fdb2801256"). InnerVolumeSpecName "kube-api-access-pqh8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:54:58.671241 master-0 kubenswrapper[7744]: I0220 14:54:58.671165 7744 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="e95606b40a17608c8c7fdabfbaff98a784411ba115dbcdf26ab46d49f3aaafbd" exitCode=0 Feb 20 14:54:58.672566 master-0 kubenswrapper[7744]: I0220 14:54:58.671268 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d15f8dfa0d113319aa72954517575419d7a6afcad7f7cef9517b2fb935c0ea42" Feb 20 14:54:58.672566 master-0 kubenswrapper[7744]: I0220 14:54:58.671264 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 20 14:54:58.672566 master-0 kubenswrapper[7744]: I0220 14:54:58.671331 7744 scope.go:117] "RemoveContainer" containerID="1dbd1253fb8b09bfbaa096d3703dce0afe66c7bc42222d1d422586b85221b083" Feb 20 14:54:58.674252 master-0 kubenswrapper[7744]: I0220 14:54:58.673862 7744 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f" exitCode=0 Feb 20 14:54:58.674252 master-0 kubenswrapper[7744]: I0220 14:54:58.673998 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerDied","Data":"5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f"} Feb 20 14:54:58.674252 master-0 kubenswrapper[7744]: I0220 14:54:58.674048 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"2210f3254bc0bc47bf63efd7d8223a017f9ce1d63560804be28d1d5db58e4a7d"} Feb 20 14:54:58.677268 master-0 kubenswrapper[7744]: I0220 14:54:58.677231 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-2b9n2_50a42db6-cb4a-4290-bff8-44fdb2801256/kube-multus-additional-cni-plugins/0.log" Feb 20 14:54:58.677443 master-0 kubenswrapper[7744]: I0220 14:54:58.677301 7744 generic.go:334] "Generic (PLEG): container finished" podID="50a42db6-cb4a-4290-bff8-44fdb2801256" containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" exitCode=137 Feb 20 14:54:58.678311 master-0 kubenswrapper[7744]: I0220 14:54:58.677509 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" Feb 20 14:54:58.678311 master-0 kubenswrapper[7744]: I0220 14:54:58.677539 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" event={"ID":"50a42db6-cb4a-4290-bff8-44fdb2801256","Type":"ContainerDied","Data":"e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86"} Feb 20 14:54:58.678311 master-0 kubenswrapper[7744]: I0220 14:54:58.677584 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2b9n2" event={"ID":"50a42db6-cb4a-4290-bff8-44fdb2801256","Type":"ContainerDied","Data":"0b37d26cfb7c6a3acfa7c13c6fa717c94bd0470cdde0665adfa25b31b9af9f84"} Feb 20 14:54:58.680904 master-0 kubenswrapper[7744]: I0220 14:54:58.680575 7744 generic.go:334] "Generic (PLEG): container finished" podID="5c8741d7-c96b-41cc-80cb-81683bb68480" containerID="cff869feeda154776fdb80bde49136ec0b5b04dcf06768e009678b70576a1603" exitCode=0 Feb 20 14:54:58.681607 master-0 kubenswrapper[7744]: I0220 14:54:58.681535 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5c8741d7-c96b-41cc-80cb-81683bb68480","Type":"ContainerDied","Data":"cff869feeda154776fdb80bde49136ec0b5b04dcf06768e009678b70576a1603"} Feb 20 14:54:58.698336 master-0 kubenswrapper[7744]: I0220 14:54:58.697524 7744 scope.go:117] "RemoveContainer" containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" Feb 20 14:54:58.736049 master-0 kubenswrapper[7744]: I0220 14:54:58.733244 7744 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/50a42db6-cb4a-4290-bff8-44fdb2801256-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Feb 20 14:54:58.736049 master-0 kubenswrapper[7744]: I0220 14:54:58.733296 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqh8z\" 
(UniqueName: \"kubernetes.io/projected/50a42db6-cb4a-4290-bff8-44fdb2801256-kube-api-access-pqh8z\") on node \"master-0\" DevicePath \"\"" Feb 20 14:54:58.736049 master-0 kubenswrapper[7744]: I0220 14:54:58.733317 7744 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/50a42db6-cb4a-4290-bff8-44fdb2801256-ready\") on node \"master-0\" DevicePath \"\"" Feb 20 14:54:58.744808 master-0 kubenswrapper[7744]: I0220 14:54:58.744747 7744 scope.go:117] "RemoveContainer" containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" Feb 20 14:54:58.745491 master-0 kubenswrapper[7744]: E0220 14:54:58.745419 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86\": container with ID starting with e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86 not found: ID does not exist" containerID="e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86" Feb 20 14:54:58.745491 master-0 kubenswrapper[7744]: I0220 14:54:58.745480 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86"} err="failed to get container status \"e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86\": rpc error: code = NotFound desc = could not find container \"e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86\": container with ID starting with e01384d41c81f2b9fb2116cca3e254b49c8eb59c01ef221b19d1ac8dc7a6cc86 not found: ID does not exist" Feb 20 14:54:58.767762 master-0 kubenswrapper[7744]: I0220 14:54:58.767700 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2b9n2"] Feb 20 14:54:58.774753 master-0 kubenswrapper[7744]: I0220 14:54:58.774667 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-multus/cni-sysctl-allowlist-ds-2b9n2"] Feb 20 14:54:59.047321 master-0 kubenswrapper[7744]: I0220 14:54:59.047221 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a42db6-cb4a-4290-bff8-44fdb2801256" path="/var/lib/kubelet/pods/50a42db6-cb4a-4290-bff8-44fdb2801256/volumes" Feb 20 14:54:59.047817 master-0 kubenswrapper[7744]: I0220 14:54:59.047778 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c3cb71c9851003c8de7e7c5db4b87e" path="/var/lib/kubelet/pods/56c3cb71c9851003c8de7e7c5db4b87e/volumes" Feb 20 14:54:59.048141 master-0 kubenswrapper[7744]: I0220 14:54:59.048103 7744 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 20 14:54:59.069063 master-0 kubenswrapper[7744]: I0220 14:54:59.069007 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 20 14:54:59.069063 master-0 kubenswrapper[7744]: I0220 14:54:59.069063 7744 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="40211bfa-4ef1-46fb-965b-ce149cab7400" Feb 20 14:54:59.072482 master-0 kubenswrapper[7744]: I0220 14:54:59.072443 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 20 14:54:59.072482 master-0 kubenswrapper[7744]: I0220 14:54:59.072464 7744 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="40211bfa-4ef1-46fb-965b-ce149cab7400" Feb 20 14:54:59.695456 master-0 kubenswrapper[7744]: I0220 14:54:59.695378 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c"} Feb 20 14:54:59.695456 
master-0 kubenswrapper[7744]: I0220 14:54:59.695454 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d"} Feb 20 14:54:59.696477 master-0 kubenswrapper[7744]: I0220 14:54:59.695481 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b"} Feb 20 14:54:59.696477 master-0 kubenswrapper[7744]: I0220 14:54:59.695571 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 14:54:59.729604 master-0 kubenswrapper[7744]: I0220 14:54:59.729430 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.729345486 podStartE2EDuration="2.729345486s" podCreationTimestamp="2026-02-20 14:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 14:54:59.725158013 +0000 UTC m=+498.927358013" watchObservedRunningTime="2026-02-20 14:54:59.729345486 +0000 UTC m=+498.931545476" Feb 20 14:55:00.081077 master-0 kubenswrapper[7744]: I0220 14:55:00.080918 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:55:00.174071 master-0 kubenswrapper[7744]: I0220 14:55:00.167498 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c8741d7-c96b-41cc-80cb-81683bb68480-kubelet-dir\") pod \"5c8741d7-c96b-41cc-80cb-81683bb68480\" (UID: \"5c8741d7-c96b-41cc-80cb-81683bb68480\") " Feb 20 14:55:00.174071 master-0 kubenswrapper[7744]: I0220 14:55:00.167599 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c8741d7-c96b-41cc-80cb-81683bb68480-var-lock\") pod \"5c8741d7-c96b-41cc-80cb-81683bb68480\" (UID: \"5c8741d7-c96b-41cc-80cb-81683bb68480\") " Feb 20 14:55:00.174071 master-0 kubenswrapper[7744]: I0220 14:55:00.167693 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c8741d7-c96b-41cc-80cb-81683bb68480-kube-api-access\") pod \"5c8741d7-c96b-41cc-80cb-81683bb68480\" (UID: \"5c8741d7-c96b-41cc-80cb-81683bb68480\") " Feb 20 14:55:00.174071 master-0 kubenswrapper[7744]: I0220 14:55:00.169546 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8741d7-c96b-41cc-80cb-81683bb68480-var-lock" (OuterVolumeSpecName: "var-lock") pod "5c8741d7-c96b-41cc-80cb-81683bb68480" (UID: "5c8741d7-c96b-41cc-80cb-81683bb68480"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:55:00.174071 master-0 kubenswrapper[7744]: I0220 14:55:00.169615 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8741d7-c96b-41cc-80cb-81683bb68480-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5c8741d7-c96b-41cc-80cb-81683bb68480" (UID: "5c8741d7-c96b-41cc-80cb-81683bb68480"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:55:00.174985 master-0 kubenswrapper[7744]: I0220 14:55:00.174779 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c8741d7-c96b-41cc-80cb-81683bb68480-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5c8741d7-c96b-41cc-80cb-81683bb68480" (UID: "5c8741d7-c96b-41cc-80cb-81683bb68480"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:55:00.269575 master-0 kubenswrapper[7744]: I0220 14:55:00.269517 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c8741d7-c96b-41cc-80cb-81683bb68480-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:00.269575 master-0 kubenswrapper[7744]: I0220 14:55:00.269556 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c8741d7-c96b-41cc-80cb-81683bb68480-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:00.269575 master-0 kubenswrapper[7744]: I0220 14:55:00.269571 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c8741d7-c96b-41cc-80cb-81683bb68480-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:00.708137 master-0 kubenswrapper[7744]: I0220 14:55:00.708015 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 14:55:00.708137 master-0 kubenswrapper[7744]: I0220 14:55:00.708083 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5c8741d7-c96b-41cc-80cb-81683bb68480","Type":"ContainerDied","Data":"eef1aa66846c305d37d9496640c02851ab1df6ad78f667da48c6c7b15695dd4f"} Feb 20 14:55:00.709001 master-0 kubenswrapper[7744]: I0220 14:55:00.708182 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef1aa66846c305d37d9496640c02851ab1df6ad78f667da48c6c7b15695dd4f" Feb 20 14:55:05.769236 master-0 kubenswrapper[7744]: I0220 14:55:05.769148 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-wl49x_a1fb2774-6dd7-4429-9df3-4ddfcdaac939/multus-admission-controller/0.log" Feb 20 14:55:05.771407 master-0 kubenswrapper[7744]: I0220 14:55:05.769257 7744 generic.go:334] "Generic (PLEG): container finished" podID="a1fb2774-6dd7-4429-9df3-4ddfcdaac939" containerID="4ddd9de39788fb0527d2c46757c0e3580e52a5c156ee1c89170d5a4e7024de06" exitCode=137 Feb 20 14:55:05.771407 master-0 kubenswrapper[7744]: I0220 14:55:05.769315 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" event={"ID":"a1fb2774-6dd7-4429-9df3-4ddfcdaac939","Type":"ContainerDied","Data":"4ddd9de39788fb0527d2c46757c0e3580e52a5c156ee1c89170d5a4e7024de06"} Feb 20 14:55:06.379117 master-0 kubenswrapper[7744]: I0220 14:55:06.379048 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-wl49x_a1fb2774-6dd7-4429-9df3-4ddfcdaac939/multus-admission-controller/0.log" Feb 20 14:55:06.379117 master-0 kubenswrapper[7744]: I0220 14:55:06.379136 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" Feb 20 14:55:06.468747 master-0 kubenswrapper[7744]: I0220 14:55:06.468658 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") pod \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " Feb 20 14:55:06.469140 master-0 kubenswrapper[7744]: I0220 14:55:06.468795 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk9xr\" (UniqueName: \"kubernetes.io/projected/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-kube-api-access-jk9xr\") pod \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\" (UID: \"a1fb2774-6dd7-4429-9df3-4ddfcdaac939\") " Feb 20 14:55:06.473443 master-0 kubenswrapper[7744]: I0220 14:55:06.473378 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-kube-api-access-jk9xr" (OuterVolumeSpecName: "kube-api-access-jk9xr") pod "a1fb2774-6dd7-4429-9df3-4ddfcdaac939" (UID: "a1fb2774-6dd7-4429-9df3-4ddfcdaac939"). InnerVolumeSpecName "kube-api-access-jk9xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:55:06.474302 master-0 kubenswrapper[7744]: I0220 14:55:06.474230 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "a1fb2774-6dd7-4429-9df3-4ddfcdaac939" (UID: "a1fb2774-6dd7-4429-9df3-4ddfcdaac939"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 14:55:06.571209 master-0 kubenswrapper[7744]: I0220 14:55:06.571135 7744 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-webhook-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:06.571209 master-0 kubenswrapper[7744]: I0220 14:55:06.571210 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk9xr\" (UniqueName: \"kubernetes.io/projected/a1fb2774-6dd7-4429-9df3-4ddfcdaac939-kube-api-access-jk9xr\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:06.781976 master-0 kubenswrapper[7744]: I0220 14:55:06.781886 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-wl49x_a1fb2774-6dd7-4429-9df3-4ddfcdaac939/multus-admission-controller/0.log" Feb 20 14:55:06.782778 master-0 kubenswrapper[7744]: I0220 14:55:06.782020 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" event={"ID":"a1fb2774-6dd7-4429-9df3-4ddfcdaac939","Type":"ContainerDied","Data":"57d1f27e3b1777057d880d98efb4a8e0d90629f2aa4281ad872ea1245d8afe4d"} Feb 20 14:55:06.782778 master-0 kubenswrapper[7744]: I0220 14:55:06.782087 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x" Feb 20 14:55:06.782778 master-0 kubenswrapper[7744]: I0220 14:55:06.782165 7744 scope.go:117] "RemoveContainer" containerID="8e0b13405e2daee0a5927d9bce9075eb42d4f3573a0155428e0f7790d97b9deb" Feb 20 14:55:06.811139 master-0 kubenswrapper[7744]: I0220 14:55:06.811064 7744 scope.go:117] "RemoveContainer" containerID="4ddd9de39788fb0527d2c46757c0e3580e52a5c156ee1c89170d5a4e7024de06" Feb 20 14:55:06.849617 master-0 kubenswrapper[7744]: I0220 14:55:06.849543 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"] Feb 20 14:55:06.857394 master-0 kubenswrapper[7744]: I0220 14:55:06.857245 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-wl49x"] Feb 20 14:55:07.047337 master-0 kubenswrapper[7744]: I0220 14:55:07.047132 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1fb2774-6dd7-4429-9df3-4ddfcdaac939" path="/var/lib/kubelet/pods/a1fb2774-6dd7-4429-9df3-4ddfcdaac939/volumes" Feb 20 14:55:07.685978 master-0 kubenswrapper[7744]: I0220 14:55:07.685866 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"] Feb 20 14:55:07.686330 master-0 kubenswrapper[7744]: E0220 14:55:07.686161 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8741d7-c96b-41cc-80cb-81683bb68480" containerName="installer" Feb 20 14:55:07.686330 master-0 kubenswrapper[7744]: I0220 14:55:07.686176 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8741d7-c96b-41cc-80cb-81683bb68480" containerName="installer" Feb 20 14:55:07.686330 master-0 kubenswrapper[7744]: E0220 14:55:07.686213 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb2774-6dd7-4429-9df3-4ddfcdaac939" containerName="multus-admission-controller" Feb 20 14:55:07.686330 master-0 
kubenswrapper[7744]: I0220 14:55:07.686222 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb2774-6dd7-4429-9df3-4ddfcdaac939" containerName="multus-admission-controller" Feb 20 14:55:07.686330 master-0 kubenswrapper[7744]: E0220 14:55:07.686233 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb2774-6dd7-4429-9df3-4ddfcdaac939" containerName="kube-rbac-proxy" Feb 20 14:55:07.686330 master-0 kubenswrapper[7744]: I0220 14:55:07.686241 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb2774-6dd7-4429-9df3-4ddfcdaac939" containerName="kube-rbac-proxy" Feb 20 14:55:07.686330 master-0 kubenswrapper[7744]: E0220 14:55:07.686258 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a42db6-cb4a-4290-bff8-44fdb2801256" containerName="kube-multus-additional-cni-plugins" Feb 20 14:55:07.686330 master-0 kubenswrapper[7744]: I0220 14:55:07.686268 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a42db6-cb4a-4290-bff8-44fdb2801256" containerName="kube-multus-additional-cni-plugins" Feb 20 14:55:07.686713 master-0 kubenswrapper[7744]: I0220 14:55:07.686400 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb2774-6dd7-4429-9df3-4ddfcdaac939" containerName="kube-rbac-proxy" Feb 20 14:55:07.686713 master-0 kubenswrapper[7744]: I0220 14:55:07.686415 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a42db6-cb4a-4290-bff8-44fdb2801256" containerName="kube-multus-additional-cni-plugins" Feb 20 14:55:07.686713 master-0 kubenswrapper[7744]: I0220 14:55:07.686431 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb2774-6dd7-4429-9df3-4ddfcdaac939" containerName="multus-admission-controller" Feb 20 14:55:07.686713 master-0 kubenswrapper[7744]: I0220 14:55:07.686444 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8741d7-c96b-41cc-80cb-81683bb68480" containerName="installer" Feb 20 14:55:07.687767 master-0 
kubenswrapper[7744]: I0220 14:55:07.687732 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 14:55:07.692698 master-0 kubenswrapper[7744]: I0220 14:55:07.692534 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 20 14:55:07.692698 master-0 kubenswrapper[7744]: I0220 14:55:07.692561 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 20 14:55:07.692698 master-0 kubenswrapper[7744]: I0220 14:55:07.692543 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-7vdpw" Feb 20 14:55:07.693540 master-0 kubenswrapper[7744]: I0220 14:55:07.693358 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 20 14:55:07.693540 master-0 kubenswrapper[7744]: I0220 14:55:07.693371 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 20 14:55:07.696689 master-0 kubenswrapper[7744]: I0220 14:55:07.696545 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 20 14:55:07.700339 master-0 kubenswrapper[7744]: I0220 14:55:07.699700 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 20 14:55:07.707551 master-0 kubenswrapper[7744]: I0220 14:55:07.707508 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"] Feb 20 14:55:07.792081 master-0 kubenswrapper[7744]: I0220 14:55:07.792005 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-serving-certs-ca-bundle\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 14:55:07.792081 master-0 kubenswrapper[7744]: I0220 14:55:07.792076 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-federate-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 14:55:07.792744 master-0 kubenswrapper[7744]: I0220 14:55:07.792185 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 14:55:07.792744 master-0 kubenswrapper[7744]: I0220 14:55:07.792243 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-metrics-client-ca\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 14:55:07.792744 master-0 kubenswrapper[7744]: I0220 14:55:07.792336 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " 
pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.792744 master-0 kubenswrapper[7744]: I0220 14:55:07.792375 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z67rw\" (UniqueName: \"kubernetes.io/projected/8e8c5772-b6e2-43d8-b173-af74541855fb-kube-api-access-z67rw\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.792744 master-0 kubenswrapper[7744]: I0220 14:55:07.792564 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.792744 master-0 kubenswrapper[7744]: I0220 14:55:07.792603 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.893727 master-0 kubenswrapper[7744]: I0220 14:55:07.893618 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-federate-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.893727 master-0 kubenswrapper[7744]: I0220 14:55:07.893724 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.894146 master-0 kubenswrapper[7744]: I0220 14:55:07.893778 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-metrics-client-ca\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.894146 master-0 kubenswrapper[7744]: I0220 14:55:07.893869 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.894146 master-0 kubenswrapper[7744]: I0220 14:55:07.893967 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z67rw\" (UniqueName: \"kubernetes.io/projected/8e8c5772-b6e2-43d8-b173-af74541855fb-kube-api-access-z67rw\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.894146 master-0 kubenswrapper[7744]: I0220 14:55:07.894072 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.894146 master-0 kubenswrapper[7744]: I0220 14:55:07.894137 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.894446 master-0 kubenswrapper[7744]: I0220 14:55:07.894207 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-serving-certs-ca-bundle\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.896733 master-0 kubenswrapper[7744]: I0220 14:55:07.896659 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.897116 master-0 kubenswrapper[7744]: I0220 14:55:07.897048 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-serving-certs-ca-bundle\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.898250 master-0 kubenswrapper[7744]: I0220 14:55:07.898177 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-metrics-client-ca\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.899574 master-0 kubenswrapper[7744]: I0220 14:55:07.899512 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-federate-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.899721 master-0 kubenswrapper[7744]: I0220 14:55:07.899657 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.901736 master-0 kubenswrapper[7744]: I0220 14:55:07.901603 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.902024 master-0 kubenswrapper[7744]: I0220 14:55:07.901901 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:07.913385 master-0 kubenswrapper[7744]: I0220 14:55:07.913305 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z67rw\" (UniqueName: \"kubernetes.io/projected/8e8c5772-b6e2-43d8-b173-af74541855fb-kube-api-access-z67rw\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:08.015569 master-0 kubenswrapper[7744]: I0220 14:55:08.015485 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 14:55:08.298523 master-0 kubenswrapper[7744]: I0220 14:55:08.298453 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"]
Feb 20 14:55:08.306242 master-0 kubenswrapper[7744]: W0220 14:55:08.306165 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e8c5772_b6e2_43d8_b173_af74541855fb.slice/crio-04c921d85b432c0d1b6bd571166f434dca8313768c8990c88277ecdb55bd26c7 WatchSource:0}: Error finding container 04c921d85b432c0d1b6bd571166f434dca8313768c8990c88277ecdb55bd26c7: Status 404 returned error can't find the container with id 04c921d85b432c0d1b6bd571166f434dca8313768c8990c88277ecdb55bd26c7
Feb 20 14:55:08.309632 master-0 kubenswrapper[7744]: I0220 14:55:08.309580 7744 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 20 14:55:08.427919 master-0 kubenswrapper[7744]: I0220 14:55:08.427827 7744 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 20 14:55:08.428822 master-0 kubenswrapper[7744]: I0220 14:55:08.428713 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" containerID="cri-o://03c3011b78a1e090e26282e5bfe01d4a95cee038877b51f0cfa6e5d29c599082" gracePeriod=30
Feb 20 14:55:08.428958 master-0 kubenswrapper[7744]: I0220 14:55:08.428765 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" containerID="cri-o://e7c73184be18a91b74e2b6b30aa92586b0cebf1411e6cd234cef01c85e9c4104" gracePeriod=30
Feb 20 14:55:08.428958 master-0 kubenswrapper[7744]: I0220 14:55:08.428852 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" containerID="cri-o://689ee9b0e5108311ff54df17a58ad47a30c4ae3de8db0ce3794fccb2f1d4b026" gracePeriod=30
Feb 20 14:55:08.429240 master-0 kubenswrapper[7744]: I0220 14:55:08.428918 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" containerID="cri-o://5c1b5acd04d8c9f3d08aff8344aaeb09f20e3251f3c25c2cbd1b35b61bcf2908" gracePeriod=30
Feb 20 14:55:08.429240 master-0 kubenswrapper[7744]: I0220 14:55:08.429062 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" containerID="cri-o://f07ceb1fe9c4ec76c3cffe25b5d95b988b21733f5be6094583204d0c17e7fcb8" gracePeriod=30
Feb 20 14:55:08.440562 master-0 kubenswrapper[7744]: I0220 14:55:08.440453 7744 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 20 14:55:08.441057 master-0 kubenswrapper[7744]: E0220 14:55:08.440969 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz"
Feb 20 14:55:08.441057 master-0 kubenswrapper[7744]: I0220 14:55:08.441027 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz"
Feb 20 14:55:08.441263 master-0 kubenswrapper[7744]: E0220 14:55:08.441054 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup"
Feb 20 14:55:08.441263 master-0 kubenswrapper[7744]: I0220 14:55:08.441105 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup"
Feb 20 14:55:08.441604 master-0 kubenswrapper[7744]: E0220 14:55:08.441561 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl"
Feb 20 14:55:08.441604 master-0 kubenswrapper[7744]: I0220 14:55:08.441594 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl"
Feb 20 14:55:08.441814 master-0 kubenswrapper[7744]: E0220 14:55:08.441652 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev"
Feb 20 14:55:08.441814 master-0 kubenswrapper[7744]: I0220 14:55:08.441665 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev"
Feb 20 14:55:08.441814 master-0 kubenswrapper[7744]: E0220 14:55:08.441687 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars"
Feb 20 14:55:08.441814 master-0 kubenswrapper[7744]: I0220 14:55:08.441735 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars"
Feb 20 14:55:08.441814 master-0 kubenswrapper[7744]: E0220 14:55:08.441764 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd"
Feb 20 14:55:08.441814 master-0 kubenswrapper[7744]: I0220 14:55:08.441777 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd"
Feb 20 14:55:08.443232 master-0 kubenswrapper[7744]: E0220 14:55:08.441831 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy"
Feb 20 14:55:08.443232 master-0 kubenswrapper[7744]: I0220 14:55:08.441844 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy"
Feb 20 14:55:08.443232 master-0 kubenswrapper[7744]: E0220 14:55:08.441872 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics"
Feb 20 14:55:08.443232 master-0 kubenswrapper[7744]: I0220 14:55:08.441916 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics"
Feb 20 14:55:08.443232 master-0 kubenswrapper[7744]: I0220 14:55:08.442332 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl"
Feb 20 14:55:08.443232 master-0 kubenswrapper[7744]: I0220 14:55:08.442361 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd"
Feb 20 14:55:08.443232 master-0 kubenswrapper[7744]: I0220 14:55:08.442379 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics"
Feb 20 14:55:08.443232 master-0 kubenswrapper[7744]: I0220 14:55:08.442399 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev"
Feb 20 14:55:08.443232 master-0 kubenswrapper[7744]: I0220 14:55:08.442427 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz"
Feb 20 14:55:08.522663 master-0 kubenswrapper[7744]: I0220 14:55:08.522576 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.522663 master-0 kubenswrapper[7744]: I0220 14:55:08.522650 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.522985 master-0 kubenswrapper[7744]: I0220 14:55:08.522697 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.522985 master-0 kubenswrapper[7744]: I0220 14:55:08.522758 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.522985 master-0 kubenswrapper[7744]: I0220 14:55:08.522842 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.522985 master-0 kubenswrapper[7744]: I0220 14:55:08.522878 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.624341 master-0 kubenswrapper[7744]: I0220 14:55:08.624275 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.624341 master-0 kubenswrapper[7744]: I0220 14:55:08.624340 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.624581 master-0 kubenswrapper[7744]: I0220 14:55:08.624431 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.624650 master-0 kubenswrapper[7744]: I0220 14:55:08.624586 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.624739 master-0 kubenswrapper[7744]: I0220 14:55:08.624675 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.624817 master-0 kubenswrapper[7744]: I0220 14:55:08.624767 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.624889 master-0 kubenswrapper[7744]: I0220 14:55:08.624819 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.624889 master-0 kubenswrapper[7744]: I0220 14:55:08.624837 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.625153 master-0 kubenswrapper[7744]: I0220 14:55:08.624899 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.625153 master-0 kubenswrapper[7744]: I0220 14:55:08.624990 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.625153 master-0 kubenswrapper[7744]: I0220 14:55:08.625035 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.625153 master-0 kubenswrapper[7744]: I0220 14:55:08.624995 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 14:55:08.798391 master-0 kubenswrapper[7744]: I0220 14:55:08.798295 7744 generic.go:334] "Generic (PLEG): container finished" podID="b6285323-3e75-4d44-ad05-98890c097dd2" containerID="e0e54afa304c07256ca81f12b5ac712d5ac8488390931a330fe4a44a3c9b790d" exitCode=0
Feb 20 14:55:08.798391 master-0 kubenswrapper[7744]: I0220 14:55:08.798398 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b6285323-3e75-4d44-ad05-98890c097dd2","Type":"ContainerDied","Data":"e0e54afa304c07256ca81f12b5ac712d5ac8488390931a330fe4a44a3c9b790d"}
Feb 20 14:55:08.800756 master-0 kubenswrapper[7744]: I0220 14:55:08.800685 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" event={"ID":"8e8c5772-b6e2-43d8-b173-af74541855fb","Type":"ContainerStarted","Data":"04c921d85b432c0d1b6bd571166f434dca8313768c8990c88277ecdb55bd26c7"}
Feb 20 14:55:08.803454 master-0 kubenswrapper[7744]: I0220 14:55:08.803415 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log"
Feb 20 14:55:08.805110 master-0 kubenswrapper[7744]: I0220 14:55:08.805071 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log"
Feb 20 14:55:08.808023 master-0 kubenswrapper[7744]: I0220 14:55:08.807821 7744 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="e7c73184be18a91b74e2b6b30aa92586b0cebf1411e6cd234cef01c85e9c4104" exitCode=2
Feb 20 14:55:08.808023 master-0 kubenswrapper[7744]: I0220 14:55:08.807885 7744 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="f07ceb1fe9c4ec76c3cffe25b5d95b988b21733f5be6094583204d0c17e7fcb8" exitCode=0
Feb 20 14:55:08.808023 master-0 kubenswrapper[7744]: I0220 14:55:08.807910 7744 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="689ee9b0e5108311ff54df17a58ad47a30c4ae3de8db0ce3794fccb2f1d4b026" exitCode=2
Feb 20 14:55:10.422204 master-0 kubenswrapper[7744]: I0220 14:55:10.422153 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 20 14:55:10.569324 master-0 kubenswrapper[7744]: I0220 14:55:10.569256 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6285323-3e75-4d44-ad05-98890c097dd2-kubelet-dir\") pod \"b6285323-3e75-4d44-ad05-98890c097dd2\" (UID: \"b6285323-3e75-4d44-ad05-98890c097dd2\") "
Feb 20 14:55:10.569324 master-0 kubenswrapper[7744]: I0220 14:55:10.569327 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b6285323-3e75-4d44-ad05-98890c097dd2-var-lock\") pod \"b6285323-3e75-4d44-ad05-98890c097dd2\" (UID: \"b6285323-3e75-4d44-ad05-98890c097dd2\") "
Feb 20 14:55:10.569643 master-0 kubenswrapper[7744]: I0220 14:55:10.569402 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6285323-3e75-4d44-ad05-98890c097dd2-kube-api-access\") pod \"b6285323-3e75-4d44-ad05-98890c097dd2\" (UID: \"b6285323-3e75-4d44-ad05-98890c097dd2\") "
Feb 20 14:55:10.569643 master-0 kubenswrapper[7744]: I0220 14:55:10.569421 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6285323-3e75-4d44-ad05-98890c097dd2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b6285323-3e75-4d44-ad05-98890c097dd2" (UID: "b6285323-3e75-4d44-ad05-98890c097dd2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:55:10.569643 master-0 kubenswrapper[7744]: I0220 14:55:10.569501 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6285323-3e75-4d44-ad05-98890c097dd2-var-lock" (OuterVolumeSpecName: "var-lock") pod "b6285323-3e75-4d44-ad05-98890c097dd2" (UID: "b6285323-3e75-4d44-ad05-98890c097dd2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:55:10.570295 master-0 kubenswrapper[7744]: I0220 14:55:10.570244 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b6285323-3e75-4d44-ad05-98890c097dd2-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 14:55:10.570295 master-0 kubenswrapper[7744]: I0220 14:55:10.570286 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6285323-3e75-4d44-ad05-98890c097dd2-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 14:55:10.574528 master-0 kubenswrapper[7744]: I0220 14:55:10.574444 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6285323-3e75-4d44-ad05-98890c097dd2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b6285323-3e75-4d44-ad05-98890c097dd2" (UID: "b6285323-3e75-4d44-ad05-98890c097dd2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:55:10.671639 master-0 kubenswrapper[7744]: I0220 14:55:10.671484 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6285323-3e75-4d44-ad05-98890c097dd2-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 14:55:10.828913 master-0 kubenswrapper[7744]: I0220 14:55:10.828841 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" event={"ID":"8e8c5772-b6e2-43d8-b173-af74541855fb","Type":"ContainerStarted","Data":"ec1f2942b833e4699e40cac84e92b5387087cd186af08453ba24e486f285439e"}
Feb 20 14:55:10.831191 master-0 kubenswrapper[7744]: I0220 14:55:10.831124 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b6285323-3e75-4d44-ad05-98890c097dd2","Type":"ContainerDied","Data":"a18ba6fef141df70b03fa378f8e3dafed41e947f342e811cb930b80a2236b753"}
Feb 20 14:55:10.831191 master-0 kubenswrapper[7744]: I0220 14:55:10.831166 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a18ba6fef141df70b03fa378f8e3dafed41e947f342e811cb930b80a2236b753"
Feb 20 14:55:10.831437 master-0 kubenswrapper[7744]: I0220 14:55:10.831219 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 20 14:55:13.212955 master-0 kubenswrapper[7744]: I0220 14:55:13.212867 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_3753e8e6-e86c-4841-bc82-ce5321b5583f/installer/0.log"
Feb 20 14:55:13.213631 master-0 kubenswrapper[7744]: I0220 14:55:13.213007 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:55:13.316422 master-0 kubenswrapper[7744]: I0220 14:55:13.316275 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_3753e8e6-e86c-4841-bc82-ce5321b5583f/installer/0.log"
Feb 20 14:55:13.316691 master-0 kubenswrapper[7744]: I0220 14:55:13.316510 7744 generic.go:334] "Generic (PLEG): container finished" podID="3753e8e6-e86c-4841-bc82-ce5321b5583f" containerID="e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24" exitCode=1
Feb 20 14:55:13.316691 master-0 kubenswrapper[7744]: I0220 14:55:13.316564 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"3753e8e6-e86c-4841-bc82-ce5321b5583f","Type":"ContainerDied","Data":"e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24"}
Feb 20 14:55:13.316691 master-0 kubenswrapper[7744]: I0220 14:55:13.316618 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"3753e8e6-e86c-4841-bc82-ce5321b5583f","Type":"ContainerDied","Data":"fc878293ad9742e0e6e203c029346a7ba8ae6bc45e7cd5df0e7da3c99e545909"}
Feb 20 14:55:13.316691 master-0 kubenswrapper[7744]: I0220 14:55:13.316648 7744 scope.go:117] "RemoveContainer" containerID="e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24"
Feb 20 14:55:13.317026 master-0 kubenswrapper[7744]: I0220 14:55:13.316703 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 20 14:55:13.317329 master-0 kubenswrapper[7744]: I0220 14:55:13.317266 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3753e8e6-e86c-4841-bc82-ce5321b5583f-kube-api-access\") pod \"3753e8e6-e86c-4841-bc82-ce5321b5583f\" (UID: \"3753e8e6-e86c-4841-bc82-ce5321b5583f\") "
Feb 20 14:55:13.317499 master-0 kubenswrapper[7744]: I0220 14:55:13.317456 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3753e8e6-e86c-4841-bc82-ce5321b5583f-var-lock\") pod \"3753e8e6-e86c-4841-bc82-ce5321b5583f\" (UID: \"3753e8e6-e86c-4841-bc82-ce5321b5583f\") "
Feb 20 14:55:13.317582 master-0 kubenswrapper[7744]: I0220 14:55:13.317503 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3753e8e6-e86c-4841-bc82-ce5321b5583f-kubelet-dir\") pod \"3753e8e6-e86c-4841-bc82-ce5321b5583f\" (UID: \"3753e8e6-e86c-4841-bc82-ce5321b5583f\") "
Feb 20 14:55:13.317838 master-0 kubenswrapper[7744]: I0220 14:55:13.317793 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3753e8e6-e86c-4841-bc82-ce5321b5583f-var-lock" (OuterVolumeSpecName: "var-lock") pod "3753e8e6-e86c-4841-bc82-ce5321b5583f" (UID: "3753e8e6-e86c-4841-bc82-ce5321b5583f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:55:13.317838 master-0 kubenswrapper[7744]: I0220 14:55:13.317829 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3753e8e6-e86c-4841-bc82-ce5321b5583f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3753e8e6-e86c-4841-bc82-ce5321b5583f" (UID: "3753e8e6-e86c-4841-bc82-ce5321b5583f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 14:55:13.318084 master-0 kubenswrapper[7744]: I0220 14:55:13.318053 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3753e8e6-e86c-4841-bc82-ce5321b5583f-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 14:55:13.318156 master-0 kubenswrapper[7744]: I0220 14:55:13.318085 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3753e8e6-e86c-4841-bc82-ce5321b5583f-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 14:55:13.322544 master-0 kubenswrapper[7744]: I0220 14:55:13.322462 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3753e8e6-e86c-4841-bc82-ce5321b5583f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3753e8e6-e86c-4841-bc82-ce5321b5583f" (UID: "3753e8e6-e86c-4841-bc82-ce5321b5583f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 14:55:13.362316 master-0 kubenswrapper[7744]: I0220 14:55:13.362262 7744 scope.go:117] "RemoveContainer" containerID="e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24"
Feb 20 14:55:13.362839 master-0 kubenswrapper[7744]: E0220 14:55:13.362781 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24\": container with ID starting with e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24 not found: ID does not exist" containerID="e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24"
Feb 20 14:55:13.362919 master-0 kubenswrapper[7744]: I0220 14:55:13.362840 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24"} err="failed to get container status \"e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24\": rpc error: code = NotFound desc = could not find container \"e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24\": container with ID starting with e9f5283b1593036f2c2506fd9fc4fbab1721fa59aa90a252424d56b7bd732b24 not found: ID does not exist"
Feb 20 14:55:13.419632 master-0 kubenswrapper[7744]: I0220 14:55:13.419485 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3753e8e6-e86c-4841-bc82-ce5321b5583f-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 14:55:14.327634 master-0 kubenswrapper[7744]: I0220 14:55:14.327526 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" event={"ID":"8e8c5772-b6e2-43d8-b173-af74541855fb","Type":"ContainerStarted","Data":"fde4c1f926ef51136abe74f14fd5102b8adec09263eab2c1bc2673f3a644e9e6"}
Feb 20 14:55:14.327634 master-0 kubenswrapper[7744]: I0220 14:55:14.327600 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" event={"ID":"8e8c5772-b6e2-43d8-b173-af74541855fb","Type":"ContainerStarted","Data":"f67a6f8819fd04404e66693d9145f5c750d553aae06fb05a6df8c4ce1725f387"}
Feb 20 14:55:19.953309 master-0 kubenswrapper[7744]: E0220 14:55:19.953196 7744 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:55:22.407038 master-0 kubenswrapper[7744]: I0220 14:55:22.406835 7744 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="c892ef72ed4285d837da54275f52fe7ef188495a8fa319fa925855d3b2fd2a7f" exitCode=1
Feb 20 14:55:22.407038 master-0 kubenswrapper[7744]: I0220 14:55:22.406903 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"c892ef72ed4285d837da54275f52fe7ef188495a8fa319fa925855d3b2fd2a7f"}
Feb 20 14:55:22.407038 master-0 kubenswrapper[7744]: I0220 14:55:22.407020 7744 scope.go:117] "RemoveContainer" containerID="270d3a75efe91ed6ef6d1abeb18e00097f8477e6f1fadd3a750363afe0b16909"
Feb 20 14:55:22.407907 master-0 kubenswrapper[7744]: I0220 14:55:22.407760 7744 scope.go:117] "RemoveContainer" containerID="c892ef72ed4285d837da54275f52fe7ef188495a8fa319fa925855d3b2fd2a7f"
Feb 20 14:55:22.408288 master-0 kubenswrapper[7744]: E0220 14:55:22.408220 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 20 14:55:22.899900 master-0 kubenswrapper[7744]: I0220 14:55:22.899784 7744 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:55:23.433801 master-0 kubenswrapper[7744]: I0220 14:55:23.433708 7744 scope.go:117] "RemoveContainer" containerID="c892ef72ed4285d837da54275f52fe7ef188495a8fa319fa925855d3b2fd2a7f"
Feb 20 14:55:23.434608 master-0 kubenswrapper[7744]: E0220 14:55:23.434131 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 20 14:55:26.463120 master-0 kubenswrapper[7744]: I0220 14:55:26.463033 7744 generic.go:334] "Generic (PLEG): container finished" podID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerID="b8c9ab75c341608bbd631623c30a262c8f71065b35633a99f02888aa224f7c9c" exitCode=0
Feb 20 14:55:26.463771 master-0 kubenswrapper[7744]: I0220 14:55:26.463139 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" event={"ID":"5f55b652-bef8-4f50-9d1d-9d0a340c1dea","Type":"ContainerDied","Data":"b8c9ab75c341608bbd631623c30a262c8f71065b35633a99f02888aa224f7c9c"}
Feb 20 14:55:26.463771 master-0 kubenswrapper[7744]: I0220 14:55:26.463233 7744 scope.go:117] "RemoveContainer" containerID="f3242db03cac46e4568d01c2eb90056f6c103228ea7040c2d234fdcf31ba865d"
Feb 20 14:55:27.475835 master-0 kubenswrapper[7744]: I0220 14:55:27.475704 7744 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" event={"ID":"5f55b652-bef8-4f50-9d1d-9d0a340c1dea","Type":"ContainerStarted","Data":"a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343"} Feb 20 14:55:28.314476 master-0 kubenswrapper[7744]: I0220 14:55:28.314380 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:55:28.318425 master-0 kubenswrapper[7744]: I0220 14:55:28.318362 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:28.318425 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:28.318425 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:28.318425 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:28.318733 master-0 kubenswrapper[7744]: I0220 14:55:28.318447 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:29.316804 master-0 kubenswrapper[7744]: I0220 14:55:29.316701 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:29.316804 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:29.316804 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:29.316804 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:29.317984 master-0 kubenswrapper[7744]: I0220 14:55:29.316823 7744 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:29.953635 master-0 kubenswrapper[7744]: E0220 14:55:29.953510 7744 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" Feb 20 14:55:30.105355 master-0 kubenswrapper[7744]: I0220 14:55:30.105257 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:55:30.106426 master-0 kubenswrapper[7744]: I0220 14:55:30.106368 7744 scope.go:117] "RemoveContainer" containerID="c892ef72ed4285d837da54275f52fe7ef188495a8fa319fa925855d3b2fd2a7f" Feb 20 14:55:30.107069 master-0 kubenswrapper[7744]: E0220 14:55:30.107007 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 20 14:55:30.313903 master-0 kubenswrapper[7744]: I0220 14:55:30.313796 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 14:55:30.317221 master-0 kubenswrapper[7744]: I0220 14:55:30.317134 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:30.317221 master-0 kubenswrapper[7744]: [-]has-synced failed: reason 
withheld Feb 20 14:55:30.317221 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:30.317221 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:30.318070 master-0 kubenswrapper[7744]: I0220 14:55:30.317247 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:30.357749 master-0 kubenswrapper[7744]: I0220 14:55:30.357649 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:55:30.501040 master-0 kubenswrapper[7744]: I0220 14:55:30.500899 7744 scope.go:117] "RemoveContainer" containerID="c892ef72ed4285d837da54275f52fe7ef188495a8fa319fa925855d3b2fd2a7f" Feb 20 14:55:30.501477 master-0 kubenswrapper[7744]: E0220 14:55:30.501421 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 20 14:55:31.316778 master-0 kubenswrapper[7744]: I0220 14:55:31.316705 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:31.316778 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:31.316778 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:31.316778 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:31.317074 master-0 kubenswrapper[7744]: I0220 
14:55:31.316829 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:32.316600 master-0 kubenswrapper[7744]: I0220 14:55:32.316485 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:32.316600 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:32.316600 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:32.316600 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:32.316600 master-0 kubenswrapper[7744]: I0220 14:55:32.316563 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:33.316719 master-0 kubenswrapper[7744]: I0220 14:55:33.316634 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:33.316719 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:33.316719 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:33.316719 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:33.317604 master-0 kubenswrapper[7744]: I0220 14:55:33.316723 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 20 14:55:34.316716 master-0 kubenswrapper[7744]: I0220 14:55:34.316631 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:34.316716 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:34.316716 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:34.316716 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:34.317604 master-0 kubenswrapper[7744]: I0220 14:55:34.316731 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:35.317323 master-0 kubenswrapper[7744]: I0220 14:55:35.317240 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:35.317323 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:35.317323 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:35.317323 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:35.318486 master-0 kubenswrapper[7744]: I0220 14:55:35.317339 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:36.316743 master-0 kubenswrapper[7744]: I0220 14:55:36.316662 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:36.316743 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:36.316743 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:36.316743 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:36.317261 master-0 kubenswrapper[7744]: I0220 14:55:36.316756 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:37.316606 master-0 kubenswrapper[7744]: I0220 14:55:37.316519 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:37.316606 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:37.316606 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:37.316606 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:37.317582 master-0 kubenswrapper[7744]: I0220 14:55:37.316612 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:38.317802 master-0 kubenswrapper[7744]: I0220 14:55:38.317674 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:38.317802 master-0 kubenswrapper[7744]: 
[-]has-synced failed: reason withheld Feb 20 14:55:38.317802 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:38.317802 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:38.318731 master-0 kubenswrapper[7744]: I0220 14:55:38.317880 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:38.585679 master-0 kubenswrapper[7744]: I0220 14:55:38.585534 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 20 14:55:38.587068 master-0 kubenswrapper[7744]: I0220 14:55:38.587010 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 20 14:55:38.588107 master-0 kubenswrapper[7744]: I0220 14:55:38.588040 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log" Feb 20 14:55:38.588543 master-0 kubenswrapper[7744]: I0220 14:55:38.588516 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 20 14:55:38.589747 master-0 kubenswrapper[7744]: I0220 14:55:38.589694 7744 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="5c1b5acd04d8c9f3d08aff8344aaeb09f20e3251f3c25c2cbd1b35b61bcf2908" exitCode=137 Feb 20 14:55:38.589896 master-0 kubenswrapper[7744]: I0220 14:55:38.589751 7744 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="03c3011b78a1e090e26282e5bfe01d4a95cee038877b51f0cfa6e5d29c599082" exitCode=137 Feb 20 14:55:39.039294 master-0 kubenswrapper[7744]: I0220 14:55:39.039240 7744 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 20 14:55:39.040988 master-0 kubenswrapper[7744]: I0220 14:55:39.040907 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 20 14:55:39.042036 master-0 kubenswrapper[7744]: I0220 14:55:39.041988 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log" Feb 20 14:55:39.042812 master-0 kubenswrapper[7744]: I0220 14:55:39.042762 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 20 14:55:39.045628 master-0 kubenswrapper[7744]: I0220 14:55:39.045564 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 20 14:55:39.237586 master-0 kubenswrapper[7744]: I0220 14:55:39.237431 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 20 14:55:39.237822 master-0 kubenswrapper[7744]: I0220 14:55:39.237604 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "usr-local-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:55:39.237822 master-0 kubenswrapper[7744]: I0220 14:55:39.237618 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 20 14:55:39.237822 master-0 kubenswrapper[7744]: I0220 14:55:39.237661 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:55:39.237822 master-0 kubenswrapper[7744]: I0220 14:55:39.237701 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 20 14:55:39.237822 master-0 kubenswrapper[7744]: I0220 14:55:39.237754 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 20 14:55:39.237822 master-0 kubenswrapper[7744]: I0220 14:55:39.237807 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 20 14:55:39.237822 master-0 kubenswrapper[7744]: I0220 14:55:39.237808 7744 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir" (OuterVolumeSpecName: "log-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:55:39.238299 master-0 kubenswrapper[7744]: I0220 14:55:39.237853 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:55:39.238299 master-0 kubenswrapper[7744]: I0220 14:55:39.237915 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " Feb 20 14:55:39.238299 master-0 kubenswrapper[7744]: I0220 14:55:39.238031 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir" (OuterVolumeSpecName: "data-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:55:39.238299 master-0 kubenswrapper[7744]: I0220 14:55:39.238068 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:55:39.238299 master-0 kubenswrapper[7744]: I0220 14:55:39.238303 7744 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:39.238597 master-0 kubenswrapper[7744]: I0220 14:55:39.238327 7744 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:39.238597 master-0 kubenswrapper[7744]: I0220 14:55:39.238348 7744 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:39.238597 master-0 kubenswrapper[7744]: I0220 14:55:39.238364 7744 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:39.238597 master-0 kubenswrapper[7744]: I0220 14:55:39.238381 7744 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:39.238597 master-0 kubenswrapper[7744]: I0220 14:55:39.238398 7744 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:39.320078 master-0 kubenswrapper[7744]: I0220 14:55:39.319991 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Feb 20 14:55:39.320078 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:39.320078 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:39.320078 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:39.321063 master-0 kubenswrapper[7744]: I0220 14:55:39.320099 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:39.600155 master-0 kubenswrapper[7744]: I0220 14:55:39.600052 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 20 14:55:39.601515 master-0 kubenswrapper[7744]: I0220 14:55:39.601452 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 20 14:55:39.602566 master-0 kubenswrapper[7744]: I0220 14:55:39.602492 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log" Feb 20 14:55:39.603463 master-0 kubenswrapper[7744]: I0220 14:55:39.603409 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 20 14:55:39.605172 master-0 kubenswrapper[7744]: I0220 14:55:39.605111 7744 scope.go:117] "RemoveContainer" containerID="e7c73184be18a91b74e2b6b30aa92586b0cebf1411e6cd234cef01c85e9c4104" Feb 20 14:55:39.605172 master-0 kubenswrapper[7744]: I0220 14:55:39.605154 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 20 14:55:39.608888 master-0 kubenswrapper[7744]: I0220 14:55:39.608842 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_3ef51d3b-cd8b-4f34-961e-8daebbed3ca6/installer/0.log" Feb 20 14:55:39.609064 master-0 kubenswrapper[7744]: I0220 14:55:39.608912 7744 generic.go:334] "Generic (PLEG): container finished" podID="3ef51d3b-cd8b-4f34-961e-8daebbed3ca6" containerID="992d06369bcdfc83fe57ae6d1c5dce1f2cfa2163b4588fe5df6d49020418c795" exitCode=1 Feb 20 14:55:39.609064 master-0 kubenswrapper[7744]: I0220 14:55:39.608984 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6","Type":"ContainerDied","Data":"992d06369bcdfc83fe57ae6d1c5dce1f2cfa2163b4588fe5df6d49020418c795"} Feb 20 14:55:39.635295 master-0 kubenswrapper[7744]: I0220 14:55:39.635216 7744 scope.go:117] "RemoveContainer" containerID="f07ceb1fe9c4ec76c3cffe25b5d95b988b21733f5be6094583204d0c17e7fcb8" Feb 20 14:55:39.665145 master-0 kubenswrapper[7744]: I0220 14:55:39.665071 7744 scope.go:117] "RemoveContainer" containerID="689ee9b0e5108311ff54df17a58ad47a30c4ae3de8db0ce3794fccb2f1d4b026" Feb 20 14:55:39.690327 master-0 kubenswrapper[7744]: I0220 14:55:39.690263 7744 scope.go:117] "RemoveContainer" containerID="5c1b5acd04d8c9f3d08aff8344aaeb09f20e3251f3c25c2cbd1b35b61bcf2908" Feb 20 14:55:39.715262 master-0 kubenswrapper[7744]: I0220 14:55:39.715193 7744 scope.go:117] "RemoveContainer" containerID="03c3011b78a1e090e26282e5bfe01d4a95cee038877b51f0cfa6e5d29c599082" Feb 20 14:55:39.733890 master-0 kubenswrapper[7744]: I0220 14:55:39.733837 7744 scope.go:117] "RemoveContainer" containerID="afdfde0efc416d5e6424b7e7305c6f92f436f753f3f94c9b4efe806e43f618f1" Feb 20 14:55:39.769548 master-0 kubenswrapper[7744]: I0220 14:55:39.769482 7744 scope.go:117] "RemoveContainer" 
containerID="667e04d2ee9447d5c6be6502611e06945f27b2a635f79b208a58d8042b30dc6b" Feb 20 14:55:39.798345 master-0 kubenswrapper[7744]: I0220 14:55:39.798276 7744 scope.go:117] "RemoveContainer" containerID="0790f8358bd195055e3ef6e8082a46ba0465ead8edb0163b9e940b4b722d77b1" Feb 20 14:55:39.881107 master-0 kubenswrapper[7744]: E0220 14:55:39.880806 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:55:29Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:55:29Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:55:29Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:55:29Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:584b5d125dad1fa4f8d03e6ace2e4901c173569ff1ed9536da6915c56fa52bc0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8124eb3839b25af23303e9fdde35728bfd24d7c0c47530e77852cba1dd9d1ffb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702755272},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:983e809911852d534091e23b95da37b24a6b70dcb49c55e79ce6dfdaa4ca0c05\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c170c998b3e5eb96b80b09daf4a33aa903dac048a
2f7d79e4c10e78e309c01eb\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1236108493},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2458acf77e6551a99656a2a1643e7ef4bf008f6bf792157614710eb9b28e0e64\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:3c45f047394ebd29a640afe4c1e96739e5155ec608b61170a2274911bdf56a3d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210258627},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea
2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc2
6cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c\\\"],\\\"sizeBytes\\\":480427687},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 14:55:39.954678 master-0 kubenswrapper[7744]: E0220 14:55:39.954584 7744 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 14:55:40.318722 master-0 kubenswrapper[7744]: I0220 14:55:40.318601 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:40.318722 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:40.318722 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:40.318722 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:40.318722 master-0 kubenswrapper[7744]: I0220 14:55:40.318687 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:41.051358 master-0 kubenswrapper[7744]: I0220 14:55:41.051278 7744 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_3ef51d3b-cd8b-4f34-961e-8daebbed3ca6/installer/0.log" Feb 20 14:55:41.052124 master-0 kubenswrapper[7744]: I0220 14:55:41.051393 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 14:55:41.053037 master-0 kubenswrapper[7744]: I0220 14:55:41.052971 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a83278819db2092fa26d8274eb3f00" path="/var/lib/kubelet/pods/18a83278819db2092fa26d8274eb3f00/volumes" Feb 20 14:55:41.168636 master-0 kubenswrapper[7744]: I0220 14:55:41.168570 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-var-lock\") pod \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\" (UID: \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\") " Feb 20 14:55:41.168831 master-0 kubenswrapper[7744]: I0220 14:55:41.168661 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-var-lock" (OuterVolumeSpecName: "var-lock") pod "3ef51d3b-cd8b-4f34-961e-8daebbed3ca6" (UID: "3ef51d3b-cd8b-4f34-961e-8daebbed3ca6"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:55:41.168831 master-0 kubenswrapper[7744]: I0220 14:55:41.168691 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-kube-api-access\") pod \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\" (UID: \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\") " Feb 20 14:55:41.168831 master-0 kubenswrapper[7744]: I0220 14:55:41.168822 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-kubelet-dir\") pod \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\" (UID: \"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6\") " Feb 20 14:55:41.169047 master-0 kubenswrapper[7744]: I0220 14:55:41.168957 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3ef51d3b-cd8b-4f34-961e-8daebbed3ca6" (UID: "3ef51d3b-cd8b-4f34-961e-8daebbed3ca6"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:55:41.169694 master-0 kubenswrapper[7744]: I0220 14:55:41.169661 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:41.169751 master-0 kubenswrapper[7744]: I0220 14:55:41.169697 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:41.171609 master-0 kubenswrapper[7744]: I0220 14:55:41.171536 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3ef51d3b-cd8b-4f34-961e-8daebbed3ca6" (UID: "3ef51d3b-cd8b-4f34-961e-8daebbed3ca6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:55:41.271874 master-0 kubenswrapper[7744]: I0220 14:55:41.271778 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ef51d3b-cd8b-4f34-961e-8daebbed3ca6-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 14:55:41.303056 master-0 kubenswrapper[7744]: I0220 14:55:41.302914 7744 scope.go:117] "RemoveContainer" containerID="e95606b40a17608c8c7fdabfbaff98a784411ba115dbcdf26ab46d49f3aaafbd" Feb 20 14:55:41.316597 master-0 kubenswrapper[7744]: I0220 14:55:41.316518 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:41.316597 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:41.316597 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:41.316597 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:41.316976 master-0 kubenswrapper[7744]: I0220 14:55:41.316625 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:41.635528 master-0 kubenswrapper[7744]: I0220 14:55:41.635349 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_3ef51d3b-cd8b-4f34-961e-8daebbed3ca6/installer/0.log" Feb 20 14:55:41.635528 master-0 kubenswrapper[7744]: I0220 14:55:41.635475 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6","Type":"ContainerDied","Data":"00c49d62b94564e456b20bb8a4dbb2c93a1fe2806ab8327bbf14d442fc57441b"} Feb 
20 14:55:41.635528 master-0 kubenswrapper[7744]: I0220 14:55:41.635524 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00c49d62b94564e456b20bb8a4dbb2c93a1fe2806ab8327bbf14d442fc57441b" Feb 20 14:55:41.636018 master-0 kubenswrapper[7744]: I0220 14:55:41.635603 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 14:55:42.318066 master-0 kubenswrapper[7744]: I0220 14:55:42.317970 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:42.318066 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:42.318066 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:42.318066 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:42.319176 master-0 kubenswrapper[7744]: I0220 14:55:42.318088 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:42.456829 master-0 kubenswrapper[7744]: E0220 14:55:42.456668 7744 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.1895fc2f4738f513 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:18a83278819db2092fa26d8274eb3f00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 
14:55:08.428735763 +0000 UTC m=+507.630935723,LastTimestamp:2026-02-20 14:55:08.428735763 +0000 UTC m=+507.630935723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:55:43.317796 master-0 kubenswrapper[7744]: I0220 14:55:43.317714 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:43.317796 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:43.317796 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:43.317796 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:43.318799 master-0 kubenswrapper[7744]: I0220 14:55:43.317809 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:44.038371 master-0 kubenswrapper[7744]: I0220 14:55:44.038277 7744 scope.go:117] "RemoveContainer" containerID="c892ef72ed4285d837da54275f52fe7ef188495a8fa319fa925855d3b2fd2a7f" Feb 20 14:55:44.317021 master-0 kubenswrapper[7744]: I0220 14:55:44.316965 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:44.317021 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:44.317021 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:44.317021 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:44.317418 master-0 kubenswrapper[7744]: I0220 14:55:44.317033 7744 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:44.664137 master-0 kubenswrapper[7744]: I0220 14:55:44.663998 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"8b9e8f4c341fb912cdf6bf5e28d7e341853636e046981ef1ccb6eadf89c8d654"} Feb 20 14:55:45.037159 master-0 kubenswrapper[7744]: I0220 14:55:45.037041 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 20 14:55:45.070319 master-0 kubenswrapper[7744]: I0220 14:55:45.070260 7744 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 14:55:45.070319 master-0 kubenswrapper[7744]: I0220 14:55:45.070314 7744 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 14:55:45.317864 master-0 kubenswrapper[7744]: I0220 14:55:45.317729 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:45.317864 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:45.317864 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:45.317864 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:45.317864 master-0 kubenswrapper[7744]: I0220 14:55:45.317818 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:46.318054 master-0 kubenswrapper[7744]: I0220 14:55:46.317958 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:46.318054 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:46.318054 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:46.318054 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:46.319253 master-0 kubenswrapper[7744]: I0220 14:55:46.318078 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:47.317324 master-0 kubenswrapper[7744]: I0220 14:55:47.317209 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:47.317324 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:47.317324 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:47.317324 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:47.318082 master-0 kubenswrapper[7744]: I0220 14:55:47.317331 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:47.971689 master-0 kubenswrapper[7744]: I0220 14:55:47.971575 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 14:55:48.316599 master-0 kubenswrapper[7744]: I0220 14:55:48.316452 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:48.316599 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:48.316599 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:48.316599 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:48.316599 master-0 kubenswrapper[7744]: I0220 14:55:48.316537 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:49.316994 master-0 kubenswrapper[7744]: I0220 14:55:49.316811 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:49.316994 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:49.316994 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:49.316994 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:49.316994 master-0 kubenswrapper[7744]: I0220 14:55:49.316898 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:49.884428 master-0 kubenswrapper[7744]: E0220 14:55:49.884308 7744 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 14:55:49.955181 master-0 kubenswrapper[7744]: E0220 14:55:49.955105 7744 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 14:55:50.105153 master-0 kubenswrapper[7744]: I0220 14:55:50.105064 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:55:50.317234 master-0 kubenswrapper[7744]: I0220 14:55:50.317102 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:50.317234 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:50.317234 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:50.317234 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:50.318153 master-0 kubenswrapper[7744]: I0220 14:55:50.317268 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:50.357409 master-0 kubenswrapper[7744]: I0220 14:55:50.357333 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:55:51.316550 master-0 kubenswrapper[7744]: I0220 14:55:51.316487 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:51.316550 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:51.316550 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:51.316550 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:51.317034 master-0 kubenswrapper[7744]: I0220 14:55:51.316567 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:52.316710 master-0 kubenswrapper[7744]: I0220 14:55:52.316642 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:52.316710 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:52.316710 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:52.316710 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:52.318197 master-0 kubenswrapper[7744]: I0220 14:55:52.316721 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:53.105969 master-0 kubenswrapper[7744]: I0220 14:55:53.105846 7744 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 14:55:53.316988 master-0 kubenswrapper[7744]: I0220 14:55:53.316898 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:53.316988 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:53.316988 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:53.316988 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:53.317651 master-0 kubenswrapper[7744]: I0220 14:55:53.317023 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:54.317110 master-0 kubenswrapper[7744]: I0220 14:55:54.316925 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:54.317110 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:54.317110 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:54.317110 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:54.317110 master-0 kubenswrapper[7744]: I0220 14:55:54.317111 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:55.316573 master-0 kubenswrapper[7744]: I0220 14:55:55.316467 7744 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:55.316573 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:55.316573 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:55.316573 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:55.317139 master-0 kubenswrapper[7744]: I0220 14:55:55.316585 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:56.316916 master-0 kubenswrapper[7744]: I0220 14:55:56.316693 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:56.316916 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:56.316916 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:56.316916 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:56.316916 master-0 kubenswrapper[7744]: I0220 14:55:56.316828 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:56.762193 master-0 kubenswrapper[7744]: I0220 14:55:56.762129 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-fjtrw_4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/ingress-operator/2.log" Feb 20 14:55:56.763619 master-0 kubenswrapper[7744]: I0220 
14:55:56.763550 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-fjtrw_4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/ingress-operator/1.log" Feb 20 14:55:56.764304 master-0 kubenswrapper[7744]: I0220 14:55:56.764247 7744 generic.go:334] "Generic (PLEG): container finished" podID="4b6a656c-40d6-4c63-9c6f-ac943eae4c9a" containerID="ba86653512a4222e60f99c9a0811e8150bea75c06b16f3bd7d165d8b4d82ace0" exitCode=1 Feb 20 14:55:56.764461 master-0 kubenswrapper[7744]: I0220 14:55:56.764321 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerDied","Data":"ba86653512a4222e60f99c9a0811e8150bea75c06b16f3bd7d165d8b4d82ace0"} Feb 20 14:55:56.764461 master-0 kubenswrapper[7744]: I0220 14:55:56.764453 7744 scope.go:117] "RemoveContainer" containerID="714dbfc4fc378943ded0e9dbde4bd4d13b8c24e9f4a8b6486b4468145db42e05" Feb 20 14:55:56.765370 master-0 kubenswrapper[7744]: I0220 14:55:56.765321 7744 scope.go:117] "RemoveContainer" containerID="ba86653512a4222e60f99c9a0811e8150bea75c06b16f3bd7d165d8b4d82ace0" Feb 20 14:55:56.765767 master-0 kubenswrapper[7744]: E0220 14:55:56.765721 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-fjtrw_openshift-ingress-operator(4b6a656c-40d6-4c63-9c6f-ac943eae4c9a)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" podUID="4b6a656c-40d6-4c63-9c6f-ac943eae4c9a" Feb 20 14:55:57.316791 master-0 kubenswrapper[7744]: I0220 14:55:57.316699 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Feb 20 14:55:57.316791 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:57.316791 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:57.316791 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:57.316791 master-0 kubenswrapper[7744]: I0220 14:55:57.316775 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:57.773328 master-0 kubenswrapper[7744]: I0220 14:55:57.773268 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-fjtrw_4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/ingress-operator/2.log" Feb 20 14:55:58.317239 master-0 kubenswrapper[7744]: I0220 14:55:58.317156 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:58.317239 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:58.317239 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:58.317239 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:58.318250 master-0 kubenswrapper[7744]: I0220 14:55:58.317248 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:59.316961 master-0 kubenswrapper[7744]: I0220 14:55:59.316847 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:55:59.316961 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:55:59.316961 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:55:59.316961 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:55:59.319144 master-0 kubenswrapper[7744]: I0220 14:55:59.316993 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:55:59.885475 master-0 kubenswrapper[7744]: E0220 14:55:59.885356 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 14:55:59.956504 master-0 kubenswrapper[7744]: E0220 14:55:59.956370 7744 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 14:55:59.956504 master-0 kubenswrapper[7744]: I0220 14:55:59.956460 7744 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 20 14:56:00.317282 master-0 kubenswrapper[7744]: I0220 14:56:00.316427 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:00.317282 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:00.317282 master-0 kubenswrapper[7744]: 
[+]process-running ok Feb 20 14:56:00.317282 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:00.317282 master-0 kubenswrapper[7744]: I0220 14:56:00.316529 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:01.317246 master-0 kubenswrapper[7744]: I0220 14:56:01.317160 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:01.317246 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:01.317246 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:01.317246 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:01.317806 master-0 kubenswrapper[7744]: I0220 14:56:01.317252 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:02.316638 master-0 kubenswrapper[7744]: I0220 14:56:02.316520 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:02.316638 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:02.316638 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:02.316638 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:02.316638 master-0 kubenswrapper[7744]: I0220 14:56:02.316627 7744 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:02.816905 master-0 kubenswrapper[7744]: I0220 14:56:02.816857 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-gprr4_33675e96-ce49-49be-9117-954ac7cca5d5/approver/1.log" Feb 20 14:56:02.818313 master-0 kubenswrapper[7744]: I0220 14:56:02.818285 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-gprr4_33675e96-ce49-49be-9117-954ac7cca5d5/approver/0.log" Feb 20 14:56:02.819167 master-0 kubenswrapper[7744]: I0220 14:56:02.819106 7744 generic.go:334] "Generic (PLEG): container finished" podID="33675e96-ce49-49be-9117-954ac7cca5d5" containerID="93c53e18dcac71f47a3746e6562e8b692068a3b0ff7c4afe8e6e0d3f178f230b" exitCode=1 Feb 20 14:56:02.819293 master-0 kubenswrapper[7744]: I0220 14:56:02.819176 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-gprr4" event={"ID":"33675e96-ce49-49be-9117-954ac7cca5d5","Type":"ContainerDied","Data":"93c53e18dcac71f47a3746e6562e8b692068a3b0ff7c4afe8e6e0d3f178f230b"} Feb 20 14:56:02.819293 master-0 kubenswrapper[7744]: I0220 14:56:02.819227 7744 scope.go:117] "RemoveContainer" containerID="4e27eb5860cdd7ddac83a0d0bd7cc2ce5f678c93e28b4ef780b63b34098f4c71" Feb 20 14:56:02.820237 master-0 kubenswrapper[7744]: I0220 14:56:02.820184 7744 scope.go:117] "RemoveContainer" containerID="93c53e18dcac71f47a3746e6562e8b692068a3b0ff7c4afe8e6e0d3f178f230b" Feb 20 14:56:02.820563 master-0 kubenswrapper[7744]: E0220 14:56:02.820517 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver 
pod=network-node-identity-gprr4_openshift-network-node-identity(33675e96-ce49-49be-9117-954ac7cca5d5)\"" pod="openshift-network-node-identity/network-node-identity-gprr4" podUID="33675e96-ce49-49be-9117-954ac7cca5d5" Feb 20 14:56:03.105809 master-0 kubenswrapper[7744]: I0220 14:56:03.105647 7744 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 14:56:03.316986 master-0 kubenswrapper[7744]: I0220 14:56:03.316884 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:03.316986 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:03.316986 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:03.316986 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:03.318249 master-0 kubenswrapper[7744]: I0220 14:56:03.317007 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:03.829636 master-0 kubenswrapper[7744]: I0220 14:56:03.829567 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-gprr4_33675e96-ce49-49be-9117-954ac7cca5d5/approver/1.log" Feb 20 14:56:04.317148 master-0 kubenswrapper[7744]: I0220 14:56:04.317055 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:04.317148 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:04.317148 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:04.317148 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:04.318171 master-0 kubenswrapper[7744]: I0220 14:56:04.317172 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:05.316660 master-0 kubenswrapper[7744]: I0220 14:56:05.316579 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:05.316660 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:05.316660 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:05.316660 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:05.317165 master-0 kubenswrapper[7744]: I0220 14:56:05.316678 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:06.316740 master-0 kubenswrapper[7744]: I0220 14:56:06.316614 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:06.316740 master-0 kubenswrapper[7744]: 
[-]has-synced failed: reason withheld Feb 20 14:56:06.316740 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:06.316740 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:06.316740 master-0 kubenswrapper[7744]: I0220 14:56:06.316731 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:07.316813 master-0 kubenswrapper[7744]: I0220 14:56:07.316743 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:07.316813 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:07.316813 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:07.316813 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:07.317345 master-0 kubenswrapper[7744]: I0220 14:56:07.316829 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:08.316232 master-0 kubenswrapper[7744]: I0220 14:56:08.316162 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:08.316232 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:08.316232 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:08.316232 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:08.316618 master-0 
kubenswrapper[7744]: I0220 14:56:08.316246 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:08.800411 master-0 kubenswrapper[7744]: I0220 14:56:08.800237 7744 status_manager.go:851] "Failed to get status for pod" podUID="b6285323-3e75-4d44-ad05-98890c097dd2" pod="openshift-etcd/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Feb 20 14:56:09.317273 master-0 kubenswrapper[7744]: I0220 14:56:09.317194 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:09.317273 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:09.317273 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:09.317273 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:09.318052 master-0 kubenswrapper[7744]: I0220 14:56:09.317286 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:09.886527 master-0 kubenswrapper[7744]: E0220 14:56:09.886420 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 14:56:09.957788 master-0 kubenswrapper[7744]: E0220 14:56:09.957640 7744 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Feb 20 14:56:10.316045 master-0 kubenswrapper[7744]: I0220 14:56:10.315918 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:10.316045 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:10.316045 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:10.316045 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:10.316473 master-0 kubenswrapper[7744]: I0220 14:56:10.316049 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:11.037761 master-0 kubenswrapper[7744]: I0220 14:56:11.037667 7744 scope.go:117] "RemoveContainer" containerID="ba86653512a4222e60f99c9a0811e8150bea75c06b16f3bd7d165d8b4d82ace0" Feb 20 14:56:11.038703 master-0 kubenswrapper[7744]: E0220 14:56:11.038166 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-fjtrw_openshift-ingress-operator(4b6a656c-40d6-4c63-9c6f-ac943eae4c9a)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" podUID="4b6a656c-40d6-4c63-9c6f-ac943eae4c9a" Feb 20 14:56:11.553753 master-0 kubenswrapper[7744]: I0220 14:56:11.545644 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:11.553753 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:11.553753 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:11.553753 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:11.553753 master-0 kubenswrapper[7744]: I0220 14:56:11.545706 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:12.318011 master-0 kubenswrapper[7744]: I0220 14:56:12.317868 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:12.318011 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:12.318011 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:12.318011 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:12.318870 master-0 kubenswrapper[7744]: I0220 14:56:12.318000 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:13.105709 master-0 kubenswrapper[7744]: I0220 14:56:13.105593 7744 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 14:56:13.106030 master-0 kubenswrapper[7744]: I0220 14:56:13.105730 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:56:13.106637 master-0 kubenswrapper[7744]: I0220 14:56:13.106580 7744 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"8b9e8f4c341fb912cdf6bf5e28d7e341853636e046981ef1ccb6eadf89c8d654"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 20 14:56:13.106717 master-0 kubenswrapper[7744]: I0220 14:56:13.106691 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://8b9e8f4c341fb912cdf6bf5e28d7e341853636e046981ef1ccb6eadf89c8d654" gracePeriod=30 Feb 20 14:56:13.316850 master-0 kubenswrapper[7744]: I0220 14:56:13.316754 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:13.316850 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:13.316850 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:13.316850 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:13.316850 master-0 kubenswrapper[7744]: I0220 14:56:13.316842 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 20 14:56:13.905953 master-0 kubenswrapper[7744]: I0220 14:56:13.905823 7744 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="8b9e8f4c341fb912cdf6bf5e28d7e341853636e046981ef1ccb6eadf89c8d654" exitCode=2 Feb 20 14:56:13.906883 master-0 kubenswrapper[7744]: I0220 14:56:13.905971 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"8b9e8f4c341fb912cdf6bf5e28d7e341853636e046981ef1ccb6eadf89c8d654"} Feb 20 14:56:13.906883 master-0 kubenswrapper[7744]: I0220 14:56:13.906069 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d"} Feb 20 14:56:13.906883 master-0 kubenswrapper[7744]: I0220 14:56:13.906103 7744 scope.go:117] "RemoveContainer" containerID="c892ef72ed4285d837da54275f52fe7ef188495a8fa319fa925855d3b2fd2a7f" Feb 20 14:56:14.317252 master-0 kubenswrapper[7744]: I0220 14:56:14.317200 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:14.317252 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:14.317252 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:14.317252 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:14.317483 master-0 kubenswrapper[7744]: I0220 14:56:14.317280 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:15.317386 master-0 kubenswrapper[7744]: I0220 14:56:15.317313 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:15.317386 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:15.317386 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:15.317386 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:15.318568 master-0 kubenswrapper[7744]: I0220 14:56:15.318077 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:16.038748 master-0 kubenswrapper[7744]: I0220 14:56:16.038676 7744 scope.go:117] "RemoveContainer" containerID="93c53e18dcac71f47a3746e6562e8b692068a3b0ff7c4afe8e6e0d3f178f230b" Feb 20 14:56:16.316827 master-0 kubenswrapper[7744]: I0220 14:56:16.316642 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:16.316827 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:16.316827 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:16.316827 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:16.316827 master-0 kubenswrapper[7744]: I0220 14:56:16.316738 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 20 14:56:16.460548 master-0 kubenswrapper[7744]: E0220 14:56:16.460358 7744 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{telemeter-client-64bcb8ffcf-vwfzx.1895fc2fbd2afaae openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:telemeter-client-64bcb8ffcf-vwfzx,UID:8e8c5772-b6e2-43d8-b173-af74541855fb,APIVersion:v1,ResourceVersion:12062,FieldPath:spec.containers{telemeter-client},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c\" in 2.098s (2.098s including waiting). Image size: 480427687 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:55:10.407531182 +0000 UTC m=+509.609731112,LastTimestamp:2026-02-20 14:55:10.407531182 +0000 UTC m=+509.609731112,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:56:16.936005 master-0 kubenswrapper[7744]: I0220 14:56:16.935909 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_380174fb-b30c-4f45-9119-397cdca91756/installer/0.log" Feb 20 14:56:16.936318 master-0 kubenswrapper[7744]: I0220 14:56:16.936030 7744 generic.go:334] "Generic (PLEG): container finished" podID="380174fb-b30c-4f45-9119-397cdca91756" containerID="28ee0d7fd2e81f54f5dcd52927e71c388f397b4ec8b363fb1c98a6fb82168cd2" exitCode=1 Feb 20 14:56:16.936318 master-0 kubenswrapper[7744]: I0220 14:56:16.936128 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" 
event={"ID":"380174fb-b30c-4f45-9119-397cdca91756","Type":"ContainerDied","Data":"28ee0d7fd2e81f54f5dcd52927e71c388f397b4ec8b363fb1c98a6fb82168cd2"} Feb 20 14:56:16.939250 master-0 kubenswrapper[7744]: I0220 14:56:16.939207 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-gprr4_33675e96-ce49-49be-9117-954ac7cca5d5/approver/1.log" Feb 20 14:56:16.939838 master-0 kubenswrapper[7744]: I0220 14:56:16.939778 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-gprr4" event={"ID":"33675e96-ce49-49be-9117-954ac7cca5d5","Type":"ContainerStarted","Data":"e228425982ffade67c1a967b350cd6a3af970665a081f0a86186926eabc43343"} Feb 20 14:56:17.316857 master-0 kubenswrapper[7744]: I0220 14:56:17.316759 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:17.316857 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:17.316857 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:17.316857 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:17.317365 master-0 kubenswrapper[7744]: I0220 14:56:17.316860 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:18.317124 master-0 kubenswrapper[7744]: I0220 14:56:18.317038 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:18.317124 master-0 
kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:18.317124 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:18.317124 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:18.318139 master-0 kubenswrapper[7744]: I0220 14:56:18.317131 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:18.415552 master-0 kubenswrapper[7744]: I0220 14:56:18.415472 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_380174fb-b30c-4f45-9119-397cdca91756/installer/0.log" Feb 20 14:56:18.415714 master-0 kubenswrapper[7744]: I0220 14:56:18.415619 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:56:18.558345 master-0 kubenswrapper[7744]: I0220 14:56:18.558246 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/380174fb-b30c-4f45-9119-397cdca91756-var-lock\") pod \"380174fb-b30c-4f45-9119-397cdca91756\" (UID: \"380174fb-b30c-4f45-9119-397cdca91756\") " Feb 20 14:56:18.558345 master-0 kubenswrapper[7744]: I0220 14:56:18.558331 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/380174fb-b30c-4f45-9119-397cdca91756-kube-api-access\") pod \"380174fb-b30c-4f45-9119-397cdca91756\" (UID: \"380174fb-b30c-4f45-9119-397cdca91756\") " Feb 20 14:56:18.558668 master-0 kubenswrapper[7744]: I0220 14:56:18.558403 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/380174fb-b30c-4f45-9119-397cdca91756-var-lock" (OuterVolumeSpecName: "var-lock") pod 
"380174fb-b30c-4f45-9119-397cdca91756" (UID: "380174fb-b30c-4f45-9119-397cdca91756"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:56:18.558668 master-0 kubenswrapper[7744]: I0220 14:56:18.558532 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/380174fb-b30c-4f45-9119-397cdca91756-kubelet-dir\") pod \"380174fb-b30c-4f45-9119-397cdca91756\" (UID: \"380174fb-b30c-4f45-9119-397cdca91756\") " Feb 20 14:56:18.558808 master-0 kubenswrapper[7744]: I0220 14:56:18.558724 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/380174fb-b30c-4f45-9119-397cdca91756-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "380174fb-b30c-4f45-9119-397cdca91756" (UID: "380174fb-b30c-4f45-9119-397cdca91756"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 14:56:18.559079 master-0 kubenswrapper[7744]: I0220 14:56:18.559027 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/380174fb-b30c-4f45-9119-397cdca91756-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 14:56:18.559079 master-0 kubenswrapper[7744]: I0220 14:56:18.559063 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/380174fb-b30c-4f45-9119-397cdca91756-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 14:56:18.563395 master-0 kubenswrapper[7744]: I0220 14:56:18.563320 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/380174fb-b30c-4f45-9119-397cdca91756-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "380174fb-b30c-4f45-9119-397cdca91756" (UID: "380174fb-b30c-4f45-9119-397cdca91756"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 14:56:18.661311 master-0 kubenswrapper[7744]: I0220 14:56:18.661200 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/380174fb-b30c-4f45-9119-397cdca91756-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 14:56:18.957504 master-0 kubenswrapper[7744]: I0220 14:56:18.957348 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_380174fb-b30c-4f45-9119-397cdca91756/installer/0.log" Feb 20 14:56:18.957504 master-0 kubenswrapper[7744]: I0220 14:56:18.957445 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"380174fb-b30c-4f45-9119-397cdca91756","Type":"ContainerDied","Data":"1ee17e2383cea0ad71bf0ed7b91b99cbf73c1a9e377abadef6ff61fb1e1e6676"} Feb 20 14:56:18.957504 master-0 kubenswrapper[7744]: I0220 14:56:18.957501 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ee17e2383cea0ad71bf0ed7b91b99cbf73c1a9e377abadef6ff61fb1e1e6676" Feb 20 14:56:18.957953 master-0 kubenswrapper[7744]: I0220 14:56:18.957542 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 14:56:19.073388 master-0 kubenswrapper[7744]: E0220 14:56:19.073275 7744 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 20 14:56:19.073865 master-0 kubenswrapper[7744]: I0220 14:56:19.073825 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 20 14:56:19.104742 master-0 kubenswrapper[7744]: W0220 14:56:19.104669 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb419b8533666d3ae7054c771ce97a95f.slice/crio-996b54ad7bf339a39ffff49432d0181ad23ef73bddec2b3817ca026944ee2962 WatchSource:0}: Error finding container 996b54ad7bf339a39ffff49432d0181ad23ef73bddec2b3817ca026944ee2962: Status 404 returned error can't find the container with id 996b54ad7bf339a39ffff49432d0181ad23ef73bddec2b3817ca026944ee2962 Feb 20 14:56:19.317344 master-0 kubenswrapper[7744]: I0220 14:56:19.317295 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:19.317344 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:19.317344 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:19.317344 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:19.318209 master-0 kubenswrapper[7744]: I0220 14:56:19.318083 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:19.886836 master-0 kubenswrapper[7744]: E0220 14:56:19.886730 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 14:56:19.886836 master-0 kubenswrapper[7744]: E0220 14:56:19.886790 7744 kubelet_node_status.go:572] "Unable to update node status" err="update node 
status exceeds retry count" Feb 20 14:56:19.966321 master-0 kubenswrapper[7744]: I0220 14:56:19.966175 7744 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="b6ea946617b2fbe51c03eb02d48883421215780882113bffd73308a394e3acaf" exitCode=0 Feb 20 14:56:19.966321 master-0 kubenswrapper[7744]: I0220 14:56:19.966253 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"b6ea946617b2fbe51c03eb02d48883421215780882113bffd73308a394e3acaf"} Feb 20 14:56:19.966321 master-0 kubenswrapper[7744]: I0220 14:56:19.966320 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"996b54ad7bf339a39ffff49432d0181ad23ef73bddec2b3817ca026944ee2962"} Feb 20 14:56:19.966858 master-0 kubenswrapper[7744]: I0220 14:56:19.966804 7744 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 14:56:19.966858 master-0 kubenswrapper[7744]: I0220 14:56:19.966841 7744 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 14:56:20.105115 master-0 kubenswrapper[7744]: I0220 14:56:20.104965 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:56:20.159692 master-0 kubenswrapper[7744]: E0220 14:56:20.159485 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 20 14:56:20.317747 master-0 kubenswrapper[7744]: I0220 14:56:20.317657 7744 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:20.317747 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:20.317747 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:20.317747 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:20.318738 master-0 kubenswrapper[7744]: I0220 14:56:20.317744 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:20.358163 master-0 kubenswrapper[7744]: I0220 14:56:20.358042 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:56:21.316201 master-0 kubenswrapper[7744]: I0220 14:56:21.316156 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:21.316201 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:21.316201 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:21.316201 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:21.316627 master-0 kubenswrapper[7744]: I0220 14:56:21.316601 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:22.316353 master-0 kubenswrapper[7744]: I0220 14:56:22.316273 7744 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:22.316353 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:22.316353 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:22.316353 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:22.317052 master-0 kubenswrapper[7744]: I0220 14:56:22.316356 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:23.106159 master-0 kubenswrapper[7744]: I0220 14:56:23.106029 7744 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 14:56:23.328893 master-0 kubenswrapper[7744]: I0220 14:56:23.328805 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:23.328893 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:23.328893 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:23.328893 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:23.329963 master-0 kubenswrapper[7744]: I0220 14:56:23.328900 7744 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:24.316950 master-0 kubenswrapper[7744]: I0220 14:56:24.316849 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:24.316950 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:24.316950 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:24.316950 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:24.317425 master-0 kubenswrapper[7744]: I0220 14:56:24.316959 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:25.037967 master-0 kubenswrapper[7744]: I0220 14:56:25.037870 7744 scope.go:117] "RemoveContainer" containerID="ba86653512a4222e60f99c9a0811e8150bea75c06b16f3bd7d165d8b4d82ace0" Feb 20 14:56:25.317335 master-0 kubenswrapper[7744]: I0220 14:56:25.317163 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:25.317335 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:25.317335 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:25.317335 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:25.317335 master-0 kubenswrapper[7744]: I0220 14:56:25.317260 7744 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:26.018262 master-0 kubenswrapper[7744]: I0220 14:56:26.018173 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-fjtrw_4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/ingress-operator/2.log" Feb 20 14:56:26.018886 master-0 kubenswrapper[7744]: I0220 14:56:26.018811 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerStarted","Data":"a9ce7eb71cc45446f0234e09c6889c880d3cf1028e26cba052e2e23651943f9c"} Feb 20 14:56:26.316725 master-0 kubenswrapper[7744]: I0220 14:56:26.316558 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:26.316725 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:26.316725 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:26.316725 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:26.316725 master-0 kubenswrapper[7744]: I0220 14:56:26.316674 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:27.317354 master-0 kubenswrapper[7744]: I0220 14:56:27.317263 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Feb 20 14:56:27.317354 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:27.317354 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:27.317354 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:27.318752 master-0 kubenswrapper[7744]: I0220 14:56:27.317361 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:28.316728 master-0 kubenswrapper[7744]: I0220 14:56:28.316630 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:28.316728 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:28.316728 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:28.316728 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:28.317213 master-0 kubenswrapper[7744]: I0220 14:56:28.316727 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:29.317035 master-0 kubenswrapper[7744]: I0220 14:56:29.316879 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:29.317035 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:29.317035 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:29.317035 
master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:29.317902 master-0 kubenswrapper[7744]: I0220 14:56:29.317071 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:30.317142 master-0 kubenswrapper[7744]: I0220 14:56:30.317054 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:30.317142 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:30.317142 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:30.317142 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:30.318268 master-0 kubenswrapper[7744]: I0220 14:56:30.317164 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:30.561237 master-0 kubenswrapper[7744]: E0220 14:56:30.561145 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 20 14:56:31.316811 master-0 kubenswrapper[7744]: I0220 14:56:31.316686 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:31.316811 master-0 
kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:31.316811 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:31.316811 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:31.316811 master-0 kubenswrapper[7744]: I0220 14:56:31.316802 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:32.317417 master-0 kubenswrapper[7744]: I0220 14:56:32.317307 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:32.317417 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:32.317417 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:32.317417 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:32.317417 master-0 kubenswrapper[7744]: I0220 14:56:32.317398 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:33.105193 master-0 kubenswrapper[7744]: I0220 14:56:33.105076 7744 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 14:56:33.317712 master-0 kubenswrapper[7744]: I0220 14:56:33.317625 7744 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:33.317712 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:33.317712 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:33.317712 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:33.318723 master-0 kubenswrapper[7744]: I0220 14:56:33.317723 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:34.317720 master-0 kubenswrapper[7744]: I0220 14:56:34.317635 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:34.317720 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:34.317720 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:34.317720 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:34.318717 master-0 kubenswrapper[7744]: I0220 14:56:34.317730 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:35.316917 master-0 kubenswrapper[7744]: I0220 14:56:35.316801 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 
14:56:35.316917 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:35.316917 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:35.316917 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:35.317438 master-0 kubenswrapper[7744]: I0220 14:56:35.316905 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:36.316182 master-0 kubenswrapper[7744]: I0220 14:56:36.316117 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:36.316182 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:36.316182 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:36.316182 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:36.317251 master-0 kubenswrapper[7744]: I0220 14:56:36.317070 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:37.316559 master-0 kubenswrapper[7744]: I0220 14:56:37.316474 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:37.316559 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:37.316559 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:37.316559 master-0 kubenswrapper[7744]: healthz 
check failed Feb 20 14:56:37.317488 master-0 kubenswrapper[7744]: I0220 14:56:37.316578 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:38.317246 master-0 kubenswrapper[7744]: I0220 14:56:38.316976 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:38.317246 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:38.317246 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:38.317246 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:38.318408 master-0 kubenswrapper[7744]: I0220 14:56:38.317261 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:39.316848 master-0 kubenswrapper[7744]: I0220 14:56:39.316778 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:39.316848 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:39.316848 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:39.316848 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:39.318002 master-0 kubenswrapper[7744]: I0220 14:56:39.316866 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" 
podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:40.149596 master-0 kubenswrapper[7744]: E0220 14:56:40.149269 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:56:30Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:56:30Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:56:30Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:56:30Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:584b5d125dad1fa4f8d03e6ace2e4901c173569ff1ed9536da6915c56fa52bc0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8124eb3839b25af23303e9fdde35728bfd24d7c0c47530e77852cba1dd9d1ffb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702755272},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:983e809911852d534091e23b95da37b24a6b70dcb49c55e79ce6dfdaa4ca0c05\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c170c998b3e5eb96b80b09daf4a33aa903dac048a2f7d79e4c10e78e309c01eb\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1236108493},{\\\"na
mes\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2458acf77e6551a99656a2a1643e7ef4bf008f6bf792157614710eb9b28e0e64\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:3c45f047394ebd29a640afe4c1e96739e5155ec608b61170a2274911bdf56a3d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210258627},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\
\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c\\\"],\\\"sizeBytes\\\":480427687},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 14:56:40.317524 master-0 kubenswrapper[7744]: I0220 14:56:40.317440 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:40.317524 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:40.317524 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:40.317524 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:40.318158 master-0 kubenswrapper[7744]: I0220 14:56:40.317575 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:41.316765 master-0 kubenswrapper[7744]: I0220 14:56:41.316632 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:41.316765 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:41.316765 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:41.316765 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:41.316765 master-0 kubenswrapper[7744]: I0220 14:56:41.316725 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:41.363604 master-0 kubenswrapper[7744]: E0220 14:56:41.363527 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 20 14:56:42.317343 master-0 kubenswrapper[7744]: I0220 14:56:42.317253 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:42.317343 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:42.317343 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:42.317343 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:42.317786 master-0 kubenswrapper[7744]: I0220 14:56:42.317359 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:43.105031 master-0 kubenswrapper[7744]: 
I0220 14:56:43.104901 7744 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 20 14:56:43.105031 master-0 kubenswrapper[7744]: I0220 14:56:43.105048 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:56:43.106000 master-0 kubenswrapper[7744]: I0220 14:56:43.105856 7744 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 20 14:56:43.106090 master-0 kubenswrapper[7744]: I0220 14:56:43.106021 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d" gracePeriod=30 Feb 20 14:56:43.228555 master-0 kubenswrapper[7744]: E0220 14:56:43.228467 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 20 14:56:43.316514 master-0 kubenswrapper[7744]: I0220 
14:56:43.316439 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:43.316514 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:43.316514 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:43.316514 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:43.316830 master-0 kubenswrapper[7744]: I0220 14:56:43.316533 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:44.164997 master-0 kubenswrapper[7744]: I0220 14:56:44.164908 7744 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d" exitCode=2 Feb 20 14:56:44.165608 master-0 kubenswrapper[7744]: I0220 14:56:44.164981 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d"} Feb 20 14:56:44.165608 master-0 kubenswrapper[7744]: I0220 14:56:44.165081 7744 scope.go:117] "RemoveContainer" containerID="8b9e8f4c341fb912cdf6bf5e28d7e341853636e046981ef1ccb6eadf89c8d654" Feb 20 14:56:44.165727 master-0 kubenswrapper[7744]: I0220 14:56:44.165700 7744 scope.go:117] "RemoveContainer" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d" Feb 20 14:56:44.166013 master-0 kubenswrapper[7744]: E0220 14:56:44.165977 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 20 14:56:44.317215 master-0 kubenswrapper[7744]: I0220 14:56:44.317136 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:44.317215 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:44.317215 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:44.317215 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:44.317539 master-0 kubenswrapper[7744]: I0220 14:56:44.317230 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:45.317579 master-0 kubenswrapper[7744]: I0220 14:56:45.317506 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:45.317579 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:45.317579 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:45.317579 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:45.318790 master-0 kubenswrapper[7744]: I0220 14:56:45.317585 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" 
podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:46.317119 master-0 kubenswrapper[7744]: I0220 14:56:46.317062 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:46.317119 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:46.317119 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:46.317119 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:46.318225 master-0 kubenswrapper[7744]: I0220 14:56:46.317957 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:47.316196 master-0 kubenswrapper[7744]: I0220 14:56:47.316110 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:47.316196 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:47.316196 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:47.316196 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:47.316717 master-0 kubenswrapper[7744]: I0220 14:56:47.316214 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:48.316911 master-0 kubenswrapper[7744]: I0220 14:56:48.316818 7744 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:48.316911 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:48.316911 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:48.316911 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:48.317852 master-0 kubenswrapper[7744]: I0220 14:56:48.316957 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:49.317754 master-0 kubenswrapper[7744]: I0220 14:56:49.317652 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:49.317754 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:49.317754 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:49.317754 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:49.318660 master-0 kubenswrapper[7744]: I0220 14:56:49.317777 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:50.150634 master-0 kubenswrapper[7744]: E0220 14:56:50.150517 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Feb 20 14:56:50.316842 master-0 kubenswrapper[7744]: I0220 14:56:50.316771 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:50.316842 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:50.316842 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:50.316842 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:50.317279 master-0 kubenswrapper[7744]: I0220 14:56:50.316867 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:50.464279 master-0 kubenswrapper[7744]: E0220 14:56:50.463894 7744 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{telemeter-client-64bcb8ffcf-vwfzx.1895fc2fc7156e9c openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:telemeter-client-64bcb8ffcf-vwfzx,UID:8e8c5772-b6e2-43d8-b173-af74541855fb,APIVersion:v1,ResourceVersion:12062,FieldPath:spec.containers{telemeter-client},},Reason:Created,Message:Created container: telemeter-client,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:55:10.573891228 +0000 UTC m=+509.776091188,LastTimestamp:2026-02-20 14:55:10.573891228 +0000 UTC m=+509.776091188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 14:56:51.317244 master-0 
kubenswrapper[7744]: I0220 14:56:51.317185 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:51.317244 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:51.317244 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:51.317244 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:51.318012 master-0 kubenswrapper[7744]: I0220 14:56:51.317964 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:52.317172 master-0 kubenswrapper[7744]: I0220 14:56:52.317098 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:52.317172 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:52.317172 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:52.317172 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:52.317172 master-0 kubenswrapper[7744]: I0220 14:56:52.317173 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:52.900062 master-0 kubenswrapper[7744]: I0220 14:56:52.899966 7744 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 14:56:52.900844 master-0 
kubenswrapper[7744]: I0220 14:56:52.900799 7744 scope.go:117] "RemoveContainer" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d" Feb 20 14:56:52.901231 master-0 kubenswrapper[7744]: E0220 14:56:52.901187 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 20 14:56:52.965264 master-0 kubenswrapper[7744]: E0220 14:56:52.965132 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 20 14:56:53.317894 master-0 kubenswrapper[7744]: I0220 14:56:53.317755 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:53.317894 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:53.317894 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:53.317894 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:53.317894 master-0 kubenswrapper[7744]: I0220 14:56:53.317867 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:53.970334 master-0 kubenswrapper[7744]: E0220 14:56:53.970230 7744 
mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 20 14:56:54.317353 master-0 kubenswrapper[7744]: I0220 14:56:54.317269 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:54.317353 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:54.317353 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:54.317353 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:54.317745 master-0 kubenswrapper[7744]: I0220 14:56:54.317352 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:55.254540 master-0 kubenswrapper[7744]: I0220 14:56:55.254453 7744 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="3bde77e581302fa688ce598a59eb1521eeb691223c05ecf792bb7f274b1fd8f2" exitCode=0 Feb 20 14:56:55.254540 master-0 kubenswrapper[7744]: I0220 14:56:55.254505 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"3bde77e581302fa688ce598a59eb1521eeb691223c05ecf792bb7f274b1fd8f2"} Feb 20 14:56:55.255469 master-0 kubenswrapper[7744]: I0220 14:56:55.254977 7744 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 14:56:55.255469 master-0 kubenswrapper[7744]: I0220 14:56:55.255005 7744 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 14:56:55.316536 master-0 kubenswrapper[7744]: I0220 14:56:55.316449 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:55.316536 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:55.316536 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:55.316536 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:55.316536 master-0 kubenswrapper[7744]: I0220 14:56:55.316517 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:56.316755 master-0 kubenswrapper[7744]: I0220 14:56:56.316636 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:56.316755 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:56.316755 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:56.316755 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:56.317706 master-0 kubenswrapper[7744]: I0220 14:56:56.316756 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:57.270654 master-0 kubenswrapper[7744]: I0220 14:56:57.270560 7744 generic.go:334] "Generic (PLEG): container finished" 
podID="c0a3548f-299c-4234-9bf1-c93efcb9740b" containerID="52bf43d0e30c121fdb642cca3e4e8c737348e2c0806817b6c660ae4bd355d192" exitCode=0 Feb 20 14:56:57.270913 master-0 kubenswrapper[7744]: I0220 14:56:57.270700 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" event={"ID":"c0a3548f-299c-4234-9bf1-c93efcb9740b","Type":"ContainerDied","Data":"52bf43d0e30c121fdb642cca3e4e8c737348e2c0806817b6c660ae4bd355d192"} Feb 20 14:56:57.271507 master-0 kubenswrapper[7744]: I0220 14:56:57.271449 7744 scope.go:117] "RemoveContainer" containerID="52bf43d0e30c121fdb642cca3e4e8c737348e2c0806817b6c660ae4bd355d192" Feb 20 14:56:57.275053 master-0 kubenswrapper[7744]: I0220 14:56:57.274981 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-855tj_caef1c17-56b0-479c-b000-caaac3c2b249/config-sync-controllers/0.log" Feb 20 14:56:57.275703 master-0 kubenswrapper[7744]: I0220 14:56:57.275657 7744 generic.go:334] "Generic (PLEG): container finished" podID="caef1c17-56b0-479c-b000-caaac3c2b249" containerID="40d63e74e24fee68be44b5de74837dcb78a9dc13e3f7cf14b4e7c069fc14a3c1" exitCode=1 Feb 20 14:56:57.275883 master-0 kubenswrapper[7744]: I0220 14:56:57.275709 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerDied","Data":"40d63e74e24fee68be44b5de74837dcb78a9dc13e3f7cf14b4e7c069fc14a3c1"} Feb 20 14:56:57.277676 master-0 kubenswrapper[7744]: I0220 14:56:57.277643 7744 scope.go:117] "RemoveContainer" containerID="40d63e74e24fee68be44b5de74837dcb78a9dc13e3f7cf14b4e7c069fc14a3c1" Feb 20 14:56:57.317238 master-0 kubenswrapper[7744]: I0220 14:56:57.317170 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:56:57.317238 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:56:57.317238 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:56:57.317238 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:56:57.318112 master-0 kubenswrapper[7744]: I0220 14:56:57.317265 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:56:58.287069 master-0 kubenswrapper[7744]: I0220 14:56:58.286998 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-855tj_caef1c17-56b0-479c-b000-caaac3c2b249/config-sync-controllers/0.log" Feb 20 14:56:58.287784 master-0 kubenswrapper[7744]: I0220 14:56:58.287726 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerStarted","Data":"031dcaaee6eadbfa72ca313aff262add31ad56554e56df47aa69a95767f1176d"} Feb 20 14:56:58.290387 master-0 kubenswrapper[7744]: I0220 14:56:58.290340 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" event={"ID":"c0a3548f-299c-4234-9bf1-c93efcb9740b","Type":"ContainerStarted","Data":"8955afec05ac17b6d5bd5b27623b6f73413fa01ace341f3ccb7e06f06406e93d"} Feb 20 14:56:58.290683 master-0 kubenswrapper[7744]: I0220 14:56:58.290632 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 
14:56:58.292799 master-0 kubenswrapper[7744]: I0220 14:56:58.292728 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 14:56:58.316894 master-0 kubenswrapper[7744]: I0220 14:56:58.316803 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:56:58.316894 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:56:58.316894 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:56:58.316894 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:56:58.316894 master-0 kubenswrapper[7744]: I0220 14:56:58.316883 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:56:59.300404 master-0 kubenswrapper[7744]: I0220 14:56:59.300315 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-855tj_caef1c17-56b0-479c-b000-caaac3c2b249/config-sync-controllers/0.log"
Feb 20 14:56:59.301389 master-0 kubenswrapper[7744]: I0220 14:56:59.300987 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-855tj_caef1c17-56b0-479c-b000-caaac3c2b249/cluster-cloud-controller-manager/0.log"
Feb 20 14:56:59.301389 master-0 kubenswrapper[7744]: I0220 14:56:59.301034 7744 generic.go:334] "Generic (PLEG): container finished" podID="caef1c17-56b0-479c-b000-caaac3c2b249" containerID="ba4791195ab28fdefd71609ee2f152b2f868666e0ec80047600b61f1c976a50f" exitCode=1
Feb 20 14:56:59.301389 master-0 kubenswrapper[7744]: I0220 14:56:59.301099 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerDied","Data":"ba4791195ab28fdefd71609ee2f152b2f868666e0ec80047600b61f1c976a50f"}
Feb 20 14:56:59.301897 master-0 kubenswrapper[7744]: I0220 14:56:59.301829 7744 scope.go:117] "RemoveContainer" containerID="ba4791195ab28fdefd71609ee2f152b2f868666e0ec80047600b61f1c976a50f"
Feb 20 14:56:59.316611 master-0 kubenswrapper[7744]: I0220 14:56:59.316542 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:56:59.316611 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:56:59.316611 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:56:59.316611 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:56:59.316967 master-0 kubenswrapper[7744]: I0220 14:56:59.316638 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:00.151009 master-0 kubenswrapper[7744]: E0220 14:57:00.150910 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:57:00.311030 master-0 kubenswrapper[7744]: I0220 14:57:00.310917 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-855tj_caef1c17-56b0-479c-b000-caaac3c2b249/config-sync-controllers/0.log"
Feb 20 14:57:00.311810 master-0 kubenswrapper[7744]: I0220 14:57:00.311569 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-855tj_caef1c17-56b0-479c-b000-caaac3c2b249/cluster-cloud-controller-manager/0.log"
Feb 20 14:57:00.311891 master-0 kubenswrapper[7744]: I0220 14:57:00.311854 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerStarted","Data":"9fb95b2eeb097676234cbbf758fd01689ed32c313ae8911055836e8c306a38f8"}
Feb 20 14:57:00.316996 master-0 kubenswrapper[7744]: I0220 14:57:00.316903 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:00.316996 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:00.316996 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:00.316996 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:00.317405 master-0 kubenswrapper[7744]: I0220 14:57:00.317029 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:01.317168 master-0 kubenswrapper[7744]: I0220 14:57:01.317059 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:01.317168 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:01.317168 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:01.317168 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:01.317168 master-0 kubenswrapper[7744]: I0220 14:57:01.317147 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:02.316679 master-0 kubenswrapper[7744]: I0220 14:57:02.316574 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:02.316679 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:02.316679 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:02.316679 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:02.316679 master-0 kubenswrapper[7744]: I0220 14:57:02.316670 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:03.039472 master-0 kubenswrapper[7744]: I0220 14:57:03.039398 7744 scope.go:117] "RemoveContainer" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d"
Feb 20 14:57:03.039804 master-0 kubenswrapper[7744]: E0220 14:57:03.039623 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 20 14:57:03.316704 master-0 kubenswrapper[7744]: I0220 14:57:03.316511 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:03.316704 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:03.316704 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:03.316704 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:03.316704 master-0 kubenswrapper[7744]: I0220 14:57:03.316611 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:04.317322 master-0 kubenswrapper[7744]: I0220 14:57:04.317241 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:04.317322 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:04.317322 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:04.317322 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:04.318283 master-0 kubenswrapper[7744]: I0220 14:57:04.317334 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:05.316919 master-0 kubenswrapper[7744]: I0220 14:57:05.316854 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:05.316919 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:05.316919 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:05.316919 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:05.317382 master-0 kubenswrapper[7744]: I0220 14:57:05.316969 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:06.166384 master-0 kubenswrapper[7744]: E0220 14:57:06.166290 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Feb 20 14:57:06.317152 master-0 kubenswrapper[7744]: I0220 14:57:06.317073 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:06.317152 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:06.317152 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:06.317152 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:06.318294 master-0 kubenswrapper[7744]: I0220 14:57:06.317160 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:07.317015 master-0 kubenswrapper[7744]: I0220 14:57:07.316882 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:07.317015 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:07.317015 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:07.317015 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:07.317616 master-0 kubenswrapper[7744]: I0220 14:57:07.317531 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:08.317146 master-0 kubenswrapper[7744]: I0220 14:57:08.317014 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:08.317146 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:08.317146 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:08.317146 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:08.317146 master-0 kubenswrapper[7744]: I0220 14:57:08.317122 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:08.802374 master-0 kubenswrapper[7744]: I0220 14:57:08.802258 7744 status_manager.go:851] "Failed to get status for pod" podUID="18a83278819db2092fa26d8274eb3f00" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)"
Feb 20 14:57:09.316616 master-0 kubenswrapper[7744]: I0220 14:57:09.316530 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:09.316616 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:09.316616 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:09.316616 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:09.317079 master-0 kubenswrapper[7744]: I0220 14:57:09.316644 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:10.152086 master-0 kubenswrapper[7744]: E0220 14:57:10.151993 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:57:10.316890 master-0 kubenswrapper[7744]: I0220 14:57:10.316830 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:10.316890 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:10.316890 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:10.316890 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:10.317430 master-0 kubenswrapper[7744]: I0220 14:57:10.316908 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:10.407490 master-0 kubenswrapper[7744]: I0220 14:57:10.407333 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-6qqvd_84a61910-48eb-4c27-8d69-f6aa7ce912ca/manager/0.log"
Feb 20 14:57:10.407490 master-0 kubenswrapper[7744]: I0220 14:57:10.407416 7744 generic.go:334] "Generic (PLEG): container finished" podID="84a61910-48eb-4c27-8d69-f6aa7ce912ca" containerID="033a3d2eac65c1b4d9f27c950aeb8dc662b4f02d9215e718db95c771bce201e1" exitCode=1
Feb 20 14:57:10.407772 master-0 kubenswrapper[7744]: I0220 14:57:10.407508 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" event={"ID":"84a61910-48eb-4c27-8d69-f6aa7ce912ca","Type":"ContainerDied","Data":"033a3d2eac65c1b4d9f27c950aeb8dc662b4f02d9215e718db95c771bce201e1"}
Feb 20 14:57:10.408348 master-0 kubenswrapper[7744]: I0220 14:57:10.408301 7744 scope.go:117] "RemoveContainer" containerID="033a3d2eac65c1b4d9f27c950aeb8dc662b4f02d9215e718db95c771bce201e1"
Feb 20 14:57:10.409905 master-0 kubenswrapper[7744]: I0220 14:57:10.409867 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/0.log"
Feb 20 14:57:10.410030 master-0 kubenswrapper[7744]: I0220 14:57:10.409954 7744 generic.go:334] "Generic (PLEG): container finished" podID="a1af84e0-776b-4285-906a-6880dbc82a7b" containerID="05169fed6fef4d82074b47315517f420ef327f3261f2444e53508e66bd83fdf7" exitCode=1
Feb 20 14:57:10.410030 master-0 kubenswrapper[7744]: I0220 14:57:10.409991 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerDied","Data":"05169fed6fef4d82074b47315517f420ef327f3261f2444e53508e66bd83fdf7"}
Feb 20 14:57:10.410793 master-0 kubenswrapper[7744]: I0220 14:57:10.410715 7744 scope.go:117] "RemoveContainer" containerID="05169fed6fef4d82074b47315517f420ef327f3261f2444e53508e66bd83fdf7"
Feb 20 14:57:10.413032 master-0 kubenswrapper[7744]: I0220 14:57:10.412979 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jl7zr_fc334fff-c0bf-4905-bcdb-b0d2a35b0590/manager/0.log"
Feb 20 14:57:10.413764 master-0 kubenswrapper[7744]: I0220 14:57:10.413622 7744 generic.go:334] "Generic (PLEG): container finished" podID="fc334fff-c0bf-4905-bcdb-b0d2a35b0590" containerID="c477064b0f3fd6cd0d107cda0e6daa47e69c108cc08e8c15adda744ad3c559d0" exitCode=1
Feb 20 14:57:10.413764 master-0 kubenswrapper[7744]: I0220 14:57:10.413676 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" event={"ID":"fc334fff-c0bf-4905-bcdb-b0d2a35b0590","Type":"ContainerDied","Data":"c477064b0f3fd6cd0d107cda0e6daa47e69c108cc08e8c15adda744ad3c559d0"}
Feb 20 14:57:10.414399 master-0 kubenswrapper[7744]: I0220 14:57:10.414359 7744 scope.go:117] "RemoveContainer" containerID="c477064b0f3fd6cd0d107cda0e6daa47e69c108cc08e8c15adda744ad3c559d0"
Feb 20 14:57:11.317320 master-0 kubenswrapper[7744]: I0220 14:57:11.317242 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:11.317320 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:11.317320 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:11.317320 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:11.318282 master-0 kubenswrapper[7744]: I0220 14:57:11.317337 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:11.427190 master-0 kubenswrapper[7744]: I0220 14:57:11.427129 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jl7zr_fc334fff-c0bf-4905-bcdb-b0d2a35b0590/manager/0.log"
Feb 20 14:57:11.428008 master-0 kubenswrapper[7744]: I0220 14:57:11.427919 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" event={"ID":"fc334fff-c0bf-4905-bcdb-b0d2a35b0590","Type":"ContainerStarted","Data":"e10f915b137c12c85dbe6de89c833ba4a9f763caac14e31f03d7e9153f656999"}
Feb 20 14:57:11.428568 master-0 kubenswrapper[7744]: I0220 14:57:11.428509 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr"
Feb 20 14:57:11.432320 master-0 kubenswrapper[7744]: I0220 14:57:11.432266 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-6qqvd_84a61910-48eb-4c27-8d69-f6aa7ce912ca/manager/0.log"
Feb 20 14:57:11.432449 master-0 kubenswrapper[7744]: I0220 14:57:11.432411 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" event={"ID":"84a61910-48eb-4c27-8d69-f6aa7ce912ca","Type":"ContainerStarted","Data":"0c7b8bf82047d7f14cd11a58c6013f1477a0bb779432cc1841cbfb0ce5b3642f"}
Feb 20 14:57:11.432745 master-0 kubenswrapper[7744]: I0220 14:57:11.432693 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd"
Feb 20 14:57:11.435575 master-0 kubenswrapper[7744]: I0220 14:57:11.435542 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/0.log"
Feb 20 14:57:11.435684 master-0 kubenswrapper[7744]: I0220 14:57:11.435622 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerStarted","Data":"c72f14e035fd996eb495f313cdb7f235446e53513149f0182ebc36aa18d25724"}
Feb 20 14:57:12.317086 master-0 kubenswrapper[7744]: I0220 14:57:12.317002 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:12.317086 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:12.317086 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:12.317086 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:12.317086 master-0 kubenswrapper[7744]: I0220 14:57:12.317086 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:13.316490 master-0 kubenswrapper[7744]: I0220 14:57:13.316410 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:13.316490 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:13.316490 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:13.316490 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:13.316844 master-0 kubenswrapper[7744]: I0220 14:57:13.316506 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:14.038473 master-0 kubenswrapper[7744]: I0220 14:57:14.038391 7744 scope.go:117] "RemoveContainer" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d"
Feb 20 14:57:14.039517 master-0 kubenswrapper[7744]: E0220 14:57:14.038810 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 20 14:57:14.317061 master-0 kubenswrapper[7744]: I0220 14:57:14.316812 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:14.317061 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:14.317061 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:14.317061 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:14.317061 master-0 kubenswrapper[7744]: I0220 14:57:14.316904 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:15.317029 master-0 kubenswrapper[7744]: I0220 14:57:15.316920 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:15.317029 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:15.317029 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:15.317029 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:15.317997 master-0 kubenswrapper[7744]: I0220 14:57:15.317053 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:16.317252 master-0 kubenswrapper[7744]: I0220 14:57:16.317171 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:16.317252 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:16.317252 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:16.317252 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:16.318401 master-0 kubenswrapper[7744]: I0220 14:57:16.317280 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:17.316741 master-0 kubenswrapper[7744]: I0220 14:57:17.316650 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:17.316741 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:17.316741 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:17.316741 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:17.316741 master-0 kubenswrapper[7744]: I0220 14:57:17.316739 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:18.316344 master-0 kubenswrapper[7744]: I0220 14:57:18.316265 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:18.316344 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:18.316344 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:18.316344 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:18.317305 master-0 kubenswrapper[7744]: I0220 14:57:18.316390 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:19.316583 master-0 kubenswrapper[7744]: I0220 14:57:19.316417 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:19.316583 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:19.316583 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:19.316583 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:19.316583 master-0 kubenswrapper[7744]: I0220 14:57:19.316523 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:20.152889 master-0 kubenswrapper[7744]: E0220 14:57:20.152779 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:57:20.152889 master-0 kubenswrapper[7744]: E0220 14:57:20.152837 7744 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 20 14:57:20.317034 master-0 kubenswrapper[7744]: I0220 14:57:20.316958 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:20.317034 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:20.317034 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:20.317034 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:20.318029 master-0 kubenswrapper[7744]: I0220 14:57:20.317039 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:21.318728 master-0 kubenswrapper[7744]: I0220 14:57:21.318447 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:21.318728 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:21.318728 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:21.318728 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:21.318728 master-0 kubenswrapper[7744]: I0220 14:57:21.318581 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:22.317197 master-0 kubenswrapper[7744]: I0220 14:57:22.317133 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:22.317197 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:22.317197 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:22.317197 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:22.317197 master-0 kubenswrapper[7744]: I0220 14:57:22.317209 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:22.568583 master-0 kubenswrapper[7744]: E0220 14:57:22.568268 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 20 14:57:23.316788 master-0 kubenswrapper[7744]: I0220 14:57:23.316688 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:23.316788 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:23.316788 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:23.316788 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:23.317338 master-0 kubenswrapper[7744]: I0220 14:57:23.316803 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:24.316456 master-0 kubenswrapper[7744]: I0220 14:57:24.316356 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:24.316456 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:24.316456 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:24.316456 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:24.316456 master-0 kubenswrapper[7744]: I0220 14:57:24.316452 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:24.370316 master-0 kubenswrapper[7744]: I0220 14:57:24.370230 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr"
Feb 20 14:57:24.468422 master-0 kubenswrapper[7744]: E0220 14:57:24.468160 7744 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{telemeter-client-64bcb8ffcf-vwfzx.1895fc2fc81872b1 openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:telemeter-client-64bcb8ffcf-vwfzx,UID:8e8c5772-b6e2-43d8-b173-af74541855fb,APIVersion:v1,ResourceVersion:12062,FieldPath:spec.containers{telemeter-client},},Reason:Started,Message:Started container telemeter-client,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:55:10.590866097 +0000 UTC m=+509.793066057,LastTimestamp:2026-02-20 14:55:10.590866097 +0000 UTC m=+509.793066057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:57:24.496132 master-0 kubenswrapper[7744]: I0220 14:57:24.496053 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd"
Feb 20 14:57:25.317061 master-0 kubenswrapper[7744]: I0220 14:57:25.316912 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:25.317061 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:25.317061 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:25.317061 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:25.318096 master-0 kubenswrapper[7744]: I0220 14:57:25.317054 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:26.046002 master-0 kubenswrapper[7744]: I0220 14:57:26.045763 7744 scope.go:117] "RemoveContainer" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d"
Feb 20 14:57:26.046526 master-0 kubenswrapper[7744]: E0220 14:57:26.046124 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 20 14:57:26.317300 master-0 kubenswrapper[7744]: I0220 14:57:26.317069 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:26.317300 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:26.317300 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:26.317300 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:26.317300 master-0 kubenswrapper[7744]: I0220 14:57:26.317132 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:27.316268 master-0 kubenswrapper[7744]: I0220 14:57:27.316197 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:57:27.316268 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:57:27.316268 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:57:27.316268 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:57:27.316610 master-0 kubenswrapper[7744]: I0220 14:57:27.316270 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:57:27.316610 master-0 kubenswrapper[7744]: I0220 14:57:27.316325 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt"
Feb 20 14:57:27.316947 master-0 kubenswrapper[7744]: I0220 14:57:27.316896 7744 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343"} pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" containerMessage="Container router failed startup probe, will be restarted"
Feb 20 14:57:27.317006 master-0 kubenswrapper[7744]: I0220 14:57:27.316960 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router"
containerID="cri-o://a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343" gracePeriod=3600 Feb 20 14:57:29.258137 master-0 kubenswrapper[7744]: E0220 14:57:29.258064 7744 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 20 14:57:29.731047 master-0 kubenswrapper[7744]: E0220 14:57:29.731002 7744 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb419b8533666d3ae7054c771ce97a95f.slice/crio-conmon-3bc728ed313ea4c2c24bfa6e5ec35ea80b76ead7f7237f5bfbb4c7d63e868b56.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb419b8533666d3ae7054c771ce97a95f.slice/crio-3bc728ed313ea4c2c24bfa6e5ec35ea80b76ead7f7237f5bfbb4c7d63e868b56.scope\": RecentStats: unable to find data in memory cache]" Feb 20 14:57:30.592310 master-0 kubenswrapper[7744]: I0220 14:57:30.592208 7744 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="3bc728ed313ea4c2c24bfa6e5ec35ea80b76ead7f7237f5bfbb4c7d63e868b56" exitCode=0 Feb 20 14:57:30.592310 master-0 kubenswrapper[7744]: I0220 14:57:30.592298 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"3bc728ed313ea4c2c24bfa6e5ec35ea80b76ead7f7237f5bfbb4c7d63e868b56"} Feb 20 14:57:30.593337 master-0 kubenswrapper[7744]: I0220 14:57:30.592769 7744 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 14:57:30.593337 master-0 kubenswrapper[7744]: I0220 14:57:30.592801 7744 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" 
podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 14:57:39.569511 master-0 kubenswrapper[7744]: E0220 14:57:39.569393 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 20 14:57:40.426881 master-0 kubenswrapper[7744]: E0220 14:57:40.426469 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:57:30Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:57:30Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:57:30Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T14:57:30Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:584b5d125dad1fa4f8d03e6ace2e4901c173569ff1ed9536da6915c56fa52bc0\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8124eb3839b25af23303e9fdde35728bfd24d7c0c47530e77852cba1dd9d1ffb\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702755272},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:983e809911852d534091e23b95
da37b24a6b70dcb49c55e79ce6dfdaa4ca0c05\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c170c998b3e5eb96b80b09daf4a33aa903dac048a2f7d79e4c10e78e309c01eb\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1236108493},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2458acf77e6551a99656a2a1643e7ef4bf008f6bf792157614710eb9b28e0e64\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:3c45f047394ebd29a640afe4c1e96739e5155ec608b61170a2274911bdf56a3d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210258627},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61
ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"]
,\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c\\\"],\\\"sizeBytes\\\":480427687},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 14:57:41.037792 master-0 kubenswrapper[7744]: I0220 14:57:41.037716 7744 scope.go:117] "RemoveContainer" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d" Feb 20 14:57:41.038678 master-0 kubenswrapper[7744]: E0220 14:57:41.038156 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" Feb 20 14:57:41.685369 master-0 kubenswrapper[7744]: I0220 14:57:41.685285 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/1.log" Feb 20 14:57:41.686309 master-0 kubenswrapper[7744]: I0220 14:57:41.686270 7744 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/0.log" Feb 20 14:57:41.686565 master-0 kubenswrapper[7744]: I0220 14:57:41.686521 7744 generic.go:334] "Generic (PLEG): container finished" podID="a1af84e0-776b-4285-906a-6880dbc82a7b" containerID="c72f14e035fd996eb495f313cdb7f235446e53513149f0182ebc36aa18d25724" exitCode=1 Feb 20 14:57:41.686761 master-0 kubenswrapper[7744]: I0220 14:57:41.686662 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerDied","Data":"c72f14e035fd996eb495f313cdb7f235446e53513149f0182ebc36aa18d25724"} Feb 20 14:57:41.686884 master-0 kubenswrapper[7744]: I0220 14:57:41.686836 7744 scope.go:117] "RemoveContainer" containerID="05169fed6fef4d82074b47315517f420ef327f3261f2444e53508e66bd83fdf7" Feb 20 14:57:41.688014 master-0 kubenswrapper[7744]: I0220 14:57:41.687973 7744 scope.go:117] "RemoveContainer" containerID="c72f14e035fd996eb495f313cdb7f235446e53513149f0182ebc36aa18d25724" Feb 20 14:57:41.688617 master-0 kubenswrapper[7744]: E0220 14:57:41.688580 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-2mtj6_openshift-cluster-storage-operator(a1af84e0-776b-4285-906a-6880dbc82a7b)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" podUID="a1af84e0-776b-4285-906a-6880dbc82a7b" Feb 20 14:57:42.702879 master-0 kubenswrapper[7744]: I0220 14:57:42.702802 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/1.log" Feb 20 
14:57:43.713396 master-0 kubenswrapper[7744]: I0220 14:57:43.713334 7744 generic.go:334] "Generic (PLEG): container finished" podID="5d2b154b-de63-4c9b-99d8-487fb3035fb9" containerID="c3644a2305f2cac790098fa61dc92fdcede4316b05ab9e68ec6a558810ecdfcf" exitCode=0 Feb 20 14:57:43.713955 master-0 kubenswrapper[7744]: I0220 14:57:43.713403 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" event={"ID":"5d2b154b-de63-4c9b-99d8-487fb3035fb9","Type":"ContainerDied","Data":"c3644a2305f2cac790098fa61dc92fdcede4316b05ab9e68ec6a558810ecdfcf"} Feb 20 14:57:43.714284 master-0 kubenswrapper[7744]: I0220 14:57:43.714242 7744 scope.go:117] "RemoveContainer" containerID="c3644a2305f2cac790098fa61dc92fdcede4316b05ab9e68ec6a558810ecdfcf" Feb 20 14:57:44.725246 master-0 kubenswrapper[7744]: I0220 14:57:44.725116 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" event={"ID":"5d2b154b-de63-4c9b-99d8-487fb3035fb9","Type":"ContainerStarted","Data":"17a1dcd626d2cfc41eeea0541351130306226005125096a98152fe8eaa485bfc"} Feb 20 14:57:46.745405 master-0 kubenswrapper[7744]: I0220 14:57:46.745298 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler/0.log" Feb 20 14:57:46.746419 master-0 kubenswrapper[7744]: I0220 14:57:46.746018 7744 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b" exitCode=1 Feb 20 14:57:46.746419 master-0 kubenswrapper[7744]: I0220 14:57:46.746080 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerDied","Data":"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b"} Feb 20 14:57:46.746871 master-0 kubenswrapper[7744]: I0220 14:57:46.746818 7744 scope.go:117] "RemoveContainer" containerID="c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b" Feb 20 14:57:47.758966 master-0 kubenswrapper[7744]: I0220 14:57:47.758871 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler/0.log" Feb 20 14:57:47.759786 master-0 kubenswrapper[7744]: I0220 14:57:47.759696 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181"} Feb 20 14:57:47.760153 master-0 kubenswrapper[7744]: I0220 14:57:47.760123 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 14:57:50.427553 master-0 kubenswrapper[7744]: E0220 14:57:50.427435 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 14:57:51.798524 master-0 kubenswrapper[7744]: I0220 14:57:51.798290 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k2tnk_86f6836b-b018-4c7a-87ad-51809a4b9c7a/cluster-baremetal-operator/0.log" Feb 20 14:57:51.798524 master-0 kubenswrapper[7744]: I0220 14:57:51.798400 7744 generic.go:334] "Generic (PLEG): container finished" podID="86f6836b-b018-4c7a-87ad-51809a4b9c7a" 
containerID="57a9d244672b000b813223a646214cb5149d5553c3f6c953fcf4645211da137b" exitCode=1 Feb 20 14:57:51.799613 master-0 kubenswrapper[7744]: I0220 14:57:51.799110 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" event={"ID":"86f6836b-b018-4c7a-87ad-51809a4b9c7a","Type":"ContainerDied","Data":"57a9d244672b000b813223a646214cb5149d5553c3f6c953fcf4645211da137b"} Feb 20 14:57:51.800107 master-0 kubenswrapper[7744]: I0220 14:57:51.800046 7744 scope.go:117] "RemoveContainer" containerID="57a9d244672b000b813223a646214cb5149d5553c3f6c953fcf4645211da137b" Feb 20 14:57:51.803962 master-0 kubenswrapper[7744]: I0220 14:57:51.803882 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-2tpv8_49044786-483a-406e-8750-f6ded400841d/control-plane-machine-set-operator/0.log" Feb 20 14:57:51.804056 master-0 kubenswrapper[7744]: I0220 14:57:51.803994 7744 generic.go:334] "Generic (PLEG): container finished" podID="49044786-483a-406e-8750-f6ded400841d" containerID="c537be0fb6abb27532917c3ba13de8d47b09b2f7faa20aacc94423594538336f" exitCode=1 Feb 20 14:57:51.804056 master-0 kubenswrapper[7744]: I0220 14:57:51.804043 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" event={"ID":"49044786-483a-406e-8750-f6ded400841d","Type":"ContainerDied","Data":"c537be0fb6abb27532917c3ba13de8d47b09b2f7faa20aacc94423594538336f"} Feb 20 14:57:51.805024 master-0 kubenswrapper[7744]: I0220 14:57:51.804976 7744 scope.go:117] "RemoveContainer" containerID="c537be0fb6abb27532917c3ba13de8d47b09b2f7faa20aacc94423594538336f" Feb 20 14:57:52.386752 master-0 kubenswrapper[7744]: I0220 14:57:52.386614 7744 patch_prober.go:28] interesting pod/controller-manager-647657fcb-w9586 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure 
output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused" start-of-body= Feb 20 14:57:52.386752 master-0 kubenswrapper[7744]: I0220 14:57:52.386697 7744 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" podUID="bdf18981-b755-4b11-8793-38bc5e2e755b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused" Feb 20 14:57:52.387097 master-0 kubenswrapper[7744]: I0220 14:57:52.386750 7744 patch_prober.go:28] interesting pod/controller-manager-647657fcb-w9586 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused" start-of-body= Feb 20 14:57:52.387097 master-0 kubenswrapper[7744]: I0220 14:57:52.386826 7744 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" podUID="bdf18981-b755-4b11-8793-38bc5e2e755b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused" Feb 20 14:57:52.815598 master-0 kubenswrapper[7744]: I0220 14:57:52.815547 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-2tpv8_49044786-483a-406e-8750-f6ded400841d/control-plane-machine-set-operator/0.log" Feb 20 14:57:52.816618 master-0 kubenswrapper[7744]: I0220 14:57:52.816565 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" event={"ID":"49044786-483a-406e-8750-f6ded400841d","Type":"ContainerStarted","Data":"34cd67fe375d543593e71b0db6a6c6578ad59b2187779424e14bfbf76ca085fe"} Feb 20 14:57:52.819995 master-0 
kubenswrapper[7744]: I0220 14:57:52.819964 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k2tnk_86f6836b-b018-4c7a-87ad-51809a4b9c7a/cluster-baremetal-operator/0.log" Feb 20 14:57:52.820317 master-0 kubenswrapper[7744]: I0220 14:57:52.820274 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" event={"ID":"86f6836b-b018-4c7a-87ad-51809a4b9c7a","Type":"ContainerStarted","Data":"56def389d27cfe7ad67180dd3ed63a339125a9d6855768c06cdeebbe5ed251cd"} Feb 20 14:57:52.822720 master-0 kubenswrapper[7744]: I0220 14:57:52.822631 7744 generic.go:334] "Generic (PLEG): container finished" podID="bdf18981-b755-4b11-8793-38bc5e2e755b" containerID="71a3faa6e2a13b4bcadc91647966380b556ee1824a73e0209af007ec80d749b3" exitCode=0 Feb 20 14:57:52.822855 master-0 kubenswrapper[7744]: I0220 14:57:52.822721 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" event={"ID":"bdf18981-b755-4b11-8793-38bc5e2e755b","Type":"ContainerDied","Data":"71a3faa6e2a13b4bcadc91647966380b556ee1824a73e0209af007ec80d749b3"} Feb 20 14:57:52.823581 master-0 kubenswrapper[7744]: I0220 14:57:52.823523 7744 scope.go:117] "RemoveContainer" containerID="71a3faa6e2a13b4bcadc91647966380b556ee1824a73e0209af007ec80d749b3" Feb 20 14:57:53.833779 master-0 kubenswrapper[7744]: I0220 14:57:53.833670 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" event={"ID":"bdf18981-b755-4b11-8793-38bc5e2e755b","Type":"ContainerStarted","Data":"db8b2b97e53f2e0f9eb8b077984d360867eb853438c79f964c4316743bc03b9a"} Feb 20 14:57:53.834630 master-0 kubenswrapper[7744]: I0220 14:57:53.834196 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 14:57:53.836461 
master-0 kubenswrapper[7744]: I0220 14:57:53.836399 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-xcrlh_996d4949-f92c-42ac-9bda-8c6ec0295e92/machine-approver-controller/0.log" Feb 20 14:57:53.837191 master-0 kubenswrapper[7744]: I0220 14:57:53.837131 7744 generic.go:334] "Generic (PLEG): container finished" podID="996d4949-f92c-42ac-9bda-8c6ec0295e92" containerID="b6e9e6d9ccde8375bcdecc9c3bf9ed6951fb841bc2a4f124a46a0fefb565de16" exitCode=255 Feb 20 14:57:53.837292 master-0 kubenswrapper[7744]: I0220 14:57:53.837193 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" event={"ID":"996d4949-f92c-42ac-9bda-8c6ec0295e92","Type":"ContainerDied","Data":"b6e9e6d9ccde8375bcdecc9c3bf9ed6951fb841bc2a4f124a46a0fefb565de16"} Feb 20 14:57:53.837899 master-0 kubenswrapper[7744]: I0220 14:57:53.837843 7744 scope.go:117] "RemoveContainer" containerID="b6e9e6d9ccde8375bcdecc9c3bf9ed6951fb841bc2a4f124a46a0fefb565de16" Feb 20 14:57:53.841410 master-0 kubenswrapper[7744]: I0220 14:57:53.841354 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 14:57:54.039211 master-0 kubenswrapper[7744]: I0220 14:57:54.039096 7744 scope.go:117] "RemoveContainer" containerID="c72f14e035fd996eb495f313cdb7f235446e53513149f0182ebc36aa18d25724" Feb 20 14:57:54.039211 master-0 kubenswrapper[7744]: I0220 14:57:54.039208 7744 scope.go:117] "RemoveContainer" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d" Feb 20 14:57:54.039779 master-0 kubenswrapper[7744]: E0220 14:57:54.039711 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(c9ad9373c007a4fcd25e70622bdc8deb)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb"
Feb 20 14:57:54.848760 master-0 kubenswrapper[7744]: I0220 14:57:54.848670 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-xcrlh_996d4949-f92c-42ac-9bda-8c6ec0295e92/machine-approver-controller/0.log"
Feb 20 14:57:54.849598 master-0 kubenswrapper[7744]: I0220 14:57:54.849249 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" event={"ID":"996d4949-f92c-42ac-9bda-8c6ec0295e92","Type":"ContainerStarted","Data":"21cd6842aaa686fe3e5ffd58e0388911fa0632be4778f1bd4b937f97547182c4"}
Feb 20 14:57:54.852415 master-0 kubenswrapper[7744]: I0220 14:57:54.852303 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/1.log"
Feb 20 14:57:54.852587 master-0 kubenswrapper[7744]: I0220 14:57:54.852475 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerStarted","Data":"476bf416eb721beeab9d0378cf1e0c01b5fd08764fb04a228352a3607254df23"}
Feb 20 14:57:56.570825 master-0 kubenswrapper[7744]: E0220 14:57:56.570757 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 20 14:57:58.471585 master-0 kubenswrapper[7744]: E0220 14:57:58.471348 7744 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{telemeter-client-64bcb8ffcf-vwfzx.1895fc2fc82c5567 openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:telemeter-client-64bcb8ffcf-vwfzx,UID:8e8c5772-b6e2-43d8-b173-af74541855fb,APIVersion:v1,ResourceVersion:12062,FieldPath:spec.containers{reload},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 14:55:10.592169319 +0000 UTC m=+509.794369279,LastTimestamp:2026-02-20 14:55:10.592169319 +0000 UTC m=+509.794369279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 14:58:00.428967 master-0 kubenswrapper[7744]: E0220 14:58:00.428797 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:58:04.596223 master-0 kubenswrapper[7744]: E0220 14:58:04.596123 7744 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 20 14:58:04.941351 master-0 kubenswrapper[7744]: I0220 14:58:04.941263 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"a9910dbc8b4d784e8e79d6947c350b95a90988deee0d33fc3478aba56e03a8cd"}
Feb 20 14:58:05.976403 master-0 kubenswrapper[7744]: I0220 14:58:05.976349 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"388297d97925469d6ea9b84d2f2a86a576d0a7b6f5f083892f33906a5b9e0f04"}
Feb 20 14:58:05.977104 master-0 kubenswrapper[7744]: I0220 14:58:05.977020 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"f4c7e525c09ff1dff6066196cae634275eb0ec1bb486fdb04d0889a7fba258c3"}
Feb 20 14:58:05.977188 master-0 kubenswrapper[7744]: I0220 14:58:05.977105 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"1fb66d91bf9251ceb0c28561e4de849b3048f457180199219b47b4ae089b8f04"}
Feb 20 14:58:06.992518 master-0 kubenswrapper[7744]: I0220 14:58:06.992407 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"e37813285486b0462bda9c69308d201df650878eef48faf9f76e3de05ff0a8ac"}
Feb 20 14:58:06.993345 master-0 kubenswrapper[7744]: I0220 14:58:06.992979 7744 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c"
Feb 20 14:58:06.993345 master-0 kubenswrapper[7744]: I0220 14:58:06.993023 7744 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c"
Feb 20 14:58:07.038173 master-0 kubenswrapper[7744]: I0220 14:58:07.038104 7744 scope.go:117] "RemoveContainer" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d"
Feb 20 14:58:08.005042 master-0 kubenswrapper[7744]: I0220 14:58:08.004945 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788"}
Feb 20 14:58:08.830834 master-0 kubenswrapper[7744]: I0220 14:58:08.830729 7744 status_manager.go:851] "Failed to get status for pod" podUID="3753e8e6-e86c-4841-bc82-ce5321b5583f" pod="openshift-kube-controller-manager/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)"
Feb 20 14:58:09.074506 master-0 kubenswrapper[7744]: I0220 14:58:09.074396 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Feb 20 14:58:09.075448 master-0 kubenswrapper[7744]: I0220 14:58:09.074504 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 20 14:58:10.104730 master-0 kubenswrapper[7744]: I0220 14:58:10.104620 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:58:10.357653 master-0 kubenswrapper[7744]: I0220 14:58:10.357472 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:58:10.429513 master-0 kubenswrapper[7744]: E0220 14:58:10.429425 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:58:13.105368 master-0 kubenswrapper[7744]: I0220 14:58:13.105275 7744 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:58:13.573510 master-0 kubenswrapper[7744]: E0220 14:58:13.573432 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 20 14:58:14.059538 master-0 kubenswrapper[7744]: I0220 14:58:14.059466 7744 generic.go:334] "Generic (PLEG): container finished" podID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerID="a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343" exitCode=0
Feb 20 14:58:14.059538 master-0 kubenswrapper[7744]: I0220 14:58:14.059531 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" event={"ID":"5f55b652-bef8-4f50-9d1d-9d0a340c1dea","Type":"ContainerDied","Data":"a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343"}
Feb 20 14:58:14.059896 master-0 kubenswrapper[7744]: I0220 14:58:14.059564 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" event={"ID":"5f55b652-bef8-4f50-9d1d-9d0a340c1dea","Type":"ContainerStarted","Data":"4c99e85f05d7056363eecf219cc429ad9226d3b3266d2b4c70190b2024933a11"}
Feb 20 14:58:14.059896 master-0 kubenswrapper[7744]: I0220 14:58:14.059595 7744 scope.go:117] "RemoveContainer" containerID="b8c9ab75c341608bbd631623c30a262c8f71065b35633a99f02888aa224f7c9c"
Feb 20 14:58:14.314358 master-0 kubenswrapper[7744]: I0220 14:58:14.314178 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt"
Feb 20 14:58:14.318344 master-0 kubenswrapper[7744]: I0220 14:58:14.318271 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:14.318344 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:14.318344 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:14.318344 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:14.318626 master-0 kubenswrapper[7744]: I0220 14:58:14.318370 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:15.316415 master-0 kubenswrapper[7744]: I0220 14:58:15.316321 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:15.316415 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:15.316415 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:15.316415 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:15.316415 master-0 kubenswrapper[7744]: I0220 14:58:15.316404 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:16.316448 master-0 kubenswrapper[7744]: I0220 14:58:16.316358 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:16.316448 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:16.316448 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:16.316448 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:16.317503 master-0 kubenswrapper[7744]: I0220 14:58:16.316467 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:17.317306 master-0 kubenswrapper[7744]: I0220 14:58:17.317227 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:17.317306 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:17.317306 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:17.317306 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:17.318285 master-0 kubenswrapper[7744]: I0220 14:58:17.317326 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:18.316803 master-0 kubenswrapper[7744]: I0220 14:58:18.316704 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:18.316803 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:18.316803 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:18.316803 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:18.317255 master-0 kubenswrapper[7744]: I0220 14:58:18.316812 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:19.113177 master-0 kubenswrapper[7744]: I0220 14:58:19.113118 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Feb 20 14:58:19.317893 master-0 kubenswrapper[7744]: I0220 14:58:19.317805 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:19.317893 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:19.317893 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:19.317893 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:19.318415 master-0 kubenswrapper[7744]: I0220 14:58:19.317912 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:20.313571 master-0 kubenswrapper[7744]: I0220 14:58:20.313486 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt"
Feb 20 14:58:20.316402 master-0 kubenswrapper[7744]: I0220 14:58:20.316354 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:20.316402 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:20.316402 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:20.316402 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:20.316748 master-0 kubenswrapper[7744]: I0220 14:58:20.316403 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:20.430307 master-0 kubenswrapper[7744]: E0220 14:58:20.430203 7744 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 14:58:20.430307 master-0 kubenswrapper[7744]: E0220 14:58:20.430266 7744 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 20 14:58:21.317148 master-0 kubenswrapper[7744]: I0220 14:58:21.317073 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:21.317148 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:21.317148 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:21.317148 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:21.318170 master-0 kubenswrapper[7744]: I0220 14:58:21.317165 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:22.316567 master-0 kubenswrapper[7744]: I0220 14:58:22.316466 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:22.316567 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:22.316567 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:22.316567 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:22.317083 master-0 kubenswrapper[7744]: I0220 14:58:22.316585 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:22.443940 master-0 kubenswrapper[7744]: I0220 14:58:22.443879 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:58:22.464415 master-0 kubenswrapper[7744]: I0220 14:58:22.464356 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 20 14:58:23.316989 master-0 kubenswrapper[7744]: I0220 14:58:23.316905 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:23.316989 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:23.316989 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:23.316989 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:23.317472 master-0 kubenswrapper[7744]: I0220 14:58:23.317013 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:24.115879 master-0 kubenswrapper[7744]: I0220 14:58:24.115802 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 20 14:58:24.317448 master-0 kubenswrapper[7744]: I0220 14:58:24.317357 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:24.317448 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:24.317448 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:24.317448 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:24.317840 master-0 kubenswrapper[7744]: I0220 14:58:24.317462 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:25.161573 master-0 kubenswrapper[7744]: I0220 14:58:25.161514 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/2.log"
Feb 20 14:58:25.162388 master-0 kubenswrapper[7744]: I0220 14:58:25.162361 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/1.log"
Feb 20 14:58:25.162468 master-0 kubenswrapper[7744]: I0220 14:58:25.162420 7744 generic.go:334] "Generic (PLEG): container finished" podID="a1af84e0-776b-4285-906a-6880dbc82a7b" containerID="476bf416eb721beeab9d0378cf1e0c01b5fd08764fb04a228352a3607254df23" exitCode=1
Feb 20 14:58:25.162468 master-0 kubenswrapper[7744]: I0220 14:58:25.162458 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerDied","Data":"476bf416eb721beeab9d0378cf1e0c01b5fd08764fb04a228352a3607254df23"}
Feb 20 14:58:25.162609 master-0 kubenswrapper[7744]: I0220 14:58:25.162502 7744 scope.go:117] "RemoveContainer" containerID="c72f14e035fd996eb495f313cdb7f235446e53513149f0182ebc36aa18d25724"
Feb 20 14:58:25.163287 master-0 kubenswrapper[7744]: I0220 14:58:25.163172 7744 scope.go:117] "RemoveContainer" containerID="476bf416eb721beeab9d0378cf1e0c01b5fd08764fb04a228352a3607254df23"
Feb 20 14:58:25.163576 master-0 kubenswrapper[7744]: E0220 14:58:25.163525 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-2mtj6_openshift-cluster-storage-operator(a1af84e0-776b-4285-906a-6880dbc82a7b)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" podUID="a1af84e0-776b-4285-906a-6880dbc82a7b"
Feb 20 14:58:25.316485 master-0 kubenswrapper[7744]: I0220 14:58:25.316418 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:25.316485 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:25.316485 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:25.316485 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:25.316844 master-0 kubenswrapper[7744]: I0220 14:58:25.316502 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:26.173339 master-0 kubenswrapper[7744]: I0220 14:58:26.173293 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/2.log"
Feb 20 14:58:26.317338 master-0 kubenswrapper[7744]: I0220 14:58:26.317278 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:26.317338 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:26.317338 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:26.317338 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:26.317888 master-0 kubenswrapper[7744]: I0220 14:58:26.317847 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:27.317422 master-0 kubenswrapper[7744]: I0220 14:58:27.317350 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:27.317422 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:27.317422 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:27.317422 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:27.318344 master-0 kubenswrapper[7744]: I0220 14:58:27.317432 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:28.317016 master-0 kubenswrapper[7744]: I0220 14:58:28.316887 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:28.317016 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:28.317016 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:28.317016 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:28.317473 master-0 kubenswrapper[7744]: I0220 14:58:28.317069 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:29.316412 master-0 kubenswrapper[7744]: I0220 14:58:29.316252 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:29.316412 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:29.316412 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:29.316412 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:29.316884 master-0 kubenswrapper[7744]: I0220 14:58:29.316475 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:30.316679 master-0 kubenswrapper[7744]: I0220 14:58:30.316604 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:30.316679 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:30.316679 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:30.316679 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:30.317710 master-0 kubenswrapper[7744]: I0220 14:58:30.316691 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:30.575286 master-0 kubenswrapper[7744]: E0220 14:58:30.574804 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 20 14:58:31.316915 master-0 kubenswrapper[7744]: I0220 14:58:31.316826 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:31.316915 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:31.316915 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:31.316915 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:31.316915 master-0 kubenswrapper[7744]: I0220 14:58:31.316903 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:32.316513 master-0 kubenswrapper[7744]: I0220 14:58:32.316417 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:32.316513 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:32.316513 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:32.316513 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:32.318089 master-0 kubenswrapper[7744]: I0220 14:58:32.316516 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:33.317253 master-0 kubenswrapper[7744]: I0220 14:58:33.317133 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:33.317253 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:33.317253 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:33.317253 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:33.317253 master-0 kubenswrapper[7744]: I0220 14:58:33.317233 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:34.316614 master-0 kubenswrapper[7744]: I0220 14:58:34.316492 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:34.316614 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:34.316614 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:34.316614 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:34.316614 master-0 kubenswrapper[7744]: I0220 14:58:34.316587 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:35.317576 master-0 kubenswrapper[7744]: I0220 14:58:35.317435 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:35.317576 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:35.317576 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:35.317576 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:35.318653 master-0 kubenswrapper[7744]: I0220 14:58:35.317579 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:36.316960 master-0 kubenswrapper[7744]: I0220 14:58:36.316858 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:36.316960 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:36.316960 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:36.316960 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:36.317385 master-0 kubenswrapper[7744]: I0220 14:58:36.317013 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:37.316591 master-0 kubenswrapper[7744]: I0220 14:58:37.316477 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:37.316591 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:37.316591 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:37.316591 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:37.316591 master-0 kubenswrapper[7744]: I0220 14:58:37.316585 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:37.968050 master-0 kubenswrapper[7744]: I0220 14:58:37.967964 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 14:58:38.316729 master-0 kubenswrapper[7744]: I0220 14:58:38.316625 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:38.316729 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:38.316729 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:38.316729 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:38.317842 master-0 kubenswrapper[7744]: I0220 14:58:38.316729 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:39.316897 master-0 kubenswrapper[7744]: I0220 14:58:39.316783 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:58:39.316897 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:58:39.316897 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:58:39.316897 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:58:39.317899 master-0 kubenswrapper[7744]: I0220 14:58:39.316896 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:58:40.038502 master-0 kubenswrapper[7744]: I0220 14:58:40.038362 7744 scope.go:117] "RemoveContainer" containerID="476bf416eb721beeab9d0378cf1e0c01b5fd08764fb04a228352a3607254df23"
Feb 20 14:58:40.038956 master-0 kubenswrapper[7744]: E0220 14:58:40.038859 7744 pod_workers.go:1301] "Error
syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-2mtj6_openshift-cluster-storage-operator(a1af84e0-776b-4285-906a-6880dbc82a7b)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" podUID="a1af84e0-776b-4285-906a-6880dbc82a7b" Feb 20 14:58:40.316529 master-0 kubenswrapper[7744]: I0220 14:58:40.316340 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:40.316529 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:40.316529 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:40.316529 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:40.316529 master-0 kubenswrapper[7744]: I0220 14:58:40.316459 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:40.996272 master-0 kubenswrapper[7744]: E0220 14:58:40.996173 7744 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 20 14:58:41.295037 master-0 kubenswrapper[7744]: I0220 14:58:41.294808 7744 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 14:58:41.295037 master-0 kubenswrapper[7744]: I0220 14:58:41.294874 7744 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 
14:58:41.317207 master-0 kubenswrapper[7744]: I0220 14:58:41.317106 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:41.317207 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:41.317207 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:41.317207 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:41.317207 master-0 kubenswrapper[7744]: I0220 14:58:41.317192 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:42.316749 master-0 kubenswrapper[7744]: I0220 14:58:42.316638 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:42.316749 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:42.316749 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:42.316749 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:42.316749 master-0 kubenswrapper[7744]: I0220 14:58:42.316732 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:43.317010 master-0 kubenswrapper[7744]: I0220 14:58:43.316887 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:43.317010 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:43.317010 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:43.317010 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:43.318163 master-0 kubenswrapper[7744]: I0220 14:58:43.317020 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:44.316855 master-0 kubenswrapper[7744]: I0220 14:58:44.316755 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:44.316855 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:44.316855 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:44.316855 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:44.316855 master-0 kubenswrapper[7744]: I0220 14:58:44.316852 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:45.316816 master-0 kubenswrapper[7744]: I0220 14:58:45.316677 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:45.316816 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:45.316816 
master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:45.316816 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:45.317767 master-0 kubenswrapper[7744]: I0220 14:58:45.316807 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:46.318084 master-0 kubenswrapper[7744]: I0220 14:58:46.317972 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:46.318084 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:46.318084 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:46.318084 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:46.318084 master-0 kubenswrapper[7744]: I0220 14:58:46.318078 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:47.318145 master-0 kubenswrapper[7744]: I0220 14:58:47.318053 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:47.318145 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:47.318145 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:47.318145 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:47.318145 master-0 kubenswrapper[7744]: I0220 14:58:47.318138 7744 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:47.576337 master-0 kubenswrapper[7744]: E0220 14:58:47.576159 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 20 14:58:48.316993 master-0 kubenswrapper[7744]: I0220 14:58:48.316834 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:48.316993 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:48.316993 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:48.316993 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:48.316993 master-0 kubenswrapper[7744]: I0220 14:58:48.316949 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:49.317892 master-0 kubenswrapper[7744]: I0220 14:58:49.317664 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:49.317892 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:49.317892 master-0 kubenswrapper[7744]: [+]process-running ok Feb 
20 14:58:49.317892 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:49.317892 master-0 kubenswrapper[7744]: I0220 14:58:49.317790 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:50.318858 master-0 kubenswrapper[7744]: I0220 14:58:50.318719 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:50.318858 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:50.318858 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:50.318858 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:50.319889 master-0 kubenswrapper[7744]: I0220 14:58:50.319214 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:51.037714 master-0 kubenswrapper[7744]: I0220 14:58:51.037621 7744 scope.go:117] "RemoveContainer" containerID="476bf416eb721beeab9d0378cf1e0c01b5fd08764fb04a228352a3607254df23" Feb 20 14:58:51.317908 master-0 kubenswrapper[7744]: I0220 14:58:51.317727 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:51.317908 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:51.317908 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:51.317908 master-0 
kubenswrapper[7744]: healthz check failed Feb 20 14:58:51.317908 master-0 kubenswrapper[7744]: I0220 14:58:51.317864 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:51.380308 master-0 kubenswrapper[7744]: I0220 14:58:51.380204 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/2.log" Feb 20 14:58:51.381148 master-0 kubenswrapper[7744]: I0220 14:58:51.380343 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerStarted","Data":"f1b1e34a79f20570df08b5141ba77d85f604d72218b6eb7fe601f67b1fcd7a77"} Feb 20 14:58:52.316961 master-0 kubenswrapper[7744]: I0220 14:58:52.316854 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:52.316961 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:52.316961 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:52.316961 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:52.317440 master-0 kubenswrapper[7744]: I0220 14:58:52.317028 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:53.317627 master-0 kubenswrapper[7744]: I0220 14:58:53.317526 7744 patch_prober.go:28] 
interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:53.317627 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:53.317627 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:53.317627 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:53.317627 master-0 kubenswrapper[7744]: I0220 14:58:53.317624 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:54.317063 master-0 kubenswrapper[7744]: I0220 14:58:54.316950 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:54.317063 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:54.317063 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:54.317063 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:54.317063 master-0 kubenswrapper[7744]: I0220 14:58:54.317059 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:55.317231 master-0 kubenswrapper[7744]: I0220 14:58:55.317130 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 20 14:58:55.317231 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:55.317231 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:55.317231 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:55.317855 master-0 kubenswrapper[7744]: I0220 14:58:55.317269 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:56.317071 master-0 kubenswrapper[7744]: I0220 14:58:56.316985 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:56.317071 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:56.317071 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:56.317071 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:56.318035 master-0 kubenswrapper[7744]: I0220 14:58:56.317080 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:57.316829 master-0 kubenswrapper[7744]: I0220 14:58:57.316730 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:57.316829 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:57.316829 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:57.316829 master-0 
kubenswrapper[7744]: healthz check failed Feb 20 14:58:57.316829 master-0 kubenswrapper[7744]: I0220 14:58:57.316826 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:58.317196 master-0 kubenswrapper[7744]: I0220 14:58:58.317094 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:58.317196 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:58.317196 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:58.317196 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:58.318403 master-0 kubenswrapper[7744]: I0220 14:58:58.317210 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:58:59.317588 master-0 kubenswrapper[7744]: I0220 14:58:59.317423 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:58:59.317588 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:58:59.317588 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:58:59.317588 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:58:59.318711 master-0 kubenswrapper[7744]: I0220 14:58:59.317672 7744 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:00.317730 master-0 kubenswrapper[7744]: I0220 14:59:00.317194 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:00.317730 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:00.317730 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:00.317730 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:00.317730 master-0 kubenswrapper[7744]: I0220 14:59:00.317291 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:01.316429 master-0 kubenswrapper[7744]: I0220 14:59:01.316354 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:01.316429 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:01.316429 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:01.316429 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:01.317077 master-0 kubenswrapper[7744]: I0220 14:59:01.316431 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:01.468534 
master-0 kubenswrapper[7744]: I0220 14:59:01.468445 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-6r5qx_8157f73d-c757-40c4-80bc-3c9de2f2288a/authentication-operator/1.log" Feb 20 14:59:01.469272 master-0 kubenswrapper[7744]: I0220 14:59:01.469135 7744 generic.go:334] "Generic (PLEG): container finished" podID="8157f73d-c757-40c4-80bc-3c9de2f2288a" containerID="9eac150251b3b5d386062f7aa8467ef3cc273bff50cfaf7bb7d3226879ebfbb8" exitCode=1 Feb 20 14:59:01.469272 master-0 kubenswrapper[7744]: I0220 14:59:01.469182 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" event={"ID":"8157f73d-c757-40c4-80bc-3c9de2f2288a","Type":"ContainerDied","Data":"9eac150251b3b5d386062f7aa8467ef3cc273bff50cfaf7bb7d3226879ebfbb8"} Feb 20 14:59:01.469272 master-0 kubenswrapper[7744]: I0220 14:59:01.469232 7744 scope.go:117] "RemoveContainer" containerID="cc8ec7e8b926ba49c143a81485ff0f3a14da5399a34238c1afe1d5e4cc71a0ba" Feb 20 14:59:01.469857 master-0 kubenswrapper[7744]: I0220 14:59:01.469794 7744 scope.go:117] "RemoveContainer" containerID="9eac150251b3b5d386062f7aa8467ef3cc273bff50cfaf7bb7d3226879ebfbb8" Feb 20 14:59:02.020326 master-0 kubenswrapper[7744]: I0220 14:59:02.020253 7744 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 14:59:02.316713 master-0 kubenswrapper[7744]: I0220 14:59:02.316561 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:02.316713 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:02.316713 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 
14:59:02.316713 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:02.316713 master-0 kubenswrapper[7744]: I0220 14:59:02.316634 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:02.488106 master-0 kubenswrapper[7744]: I0220 14:59:02.488008 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-6r5qx_8157f73d-c757-40c4-80bc-3c9de2f2288a/authentication-operator/1.log" Feb 20 14:59:02.488748 master-0 kubenswrapper[7744]: I0220 14:59:02.488141 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" event={"ID":"8157f73d-c757-40c4-80bc-3c9de2f2288a","Type":"ContainerStarted","Data":"7385adb772bdc866c1e9e9a8c8aa66d6fd12f60258c65541abbc4f3fd882ad30"} Feb 20 14:59:03.316630 master-0 kubenswrapper[7744]: I0220 14:59:03.316535 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:03.316630 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:03.316630 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:03.316630 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:03.317171 master-0 kubenswrapper[7744]: I0220 14:59:03.316634 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:04.317217 master-0 kubenswrapper[7744]: I0220 
14:59:04.317119 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:04.317217 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:04.317217 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:04.317217 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:04.318346 master-0 kubenswrapper[7744]: I0220 14:59:04.317231 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:04.577881 master-0 kubenswrapper[7744]: E0220 14:59:04.577651 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 20 14:59:05.317857 master-0 kubenswrapper[7744]: I0220 14:59:05.317771 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:05.317857 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:05.317857 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:05.317857 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:05.318798 master-0 kubenswrapper[7744]: I0220 14:59:05.317888 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:06.317353 master-0 kubenswrapper[7744]: I0220 14:59:06.317255 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:06.317353 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:06.317353 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:06.317353 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:06.317812 master-0 kubenswrapper[7744]: I0220 14:59:06.317352 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:07.317069 master-0 kubenswrapper[7744]: I0220 14:59:07.316985 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:07.317069 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:07.317069 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:07.317069 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:07.318522 master-0 kubenswrapper[7744]: I0220 14:59:07.317080 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:08.316833 master-0 kubenswrapper[7744]: I0220 14:59:08.316754 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:08.316833 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:08.316833 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:08.316833 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:08.317902 master-0 kubenswrapper[7744]: I0220 14:59:08.316841 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:08.832893 master-0 kubenswrapper[7744]: I0220 14:59:08.832764 7744 status_manager.go:851] "Failed to get status for pod" podUID="380174fb-b30c-4f45-9119-397cdca91756" pod="openshift-kube-controller-manager/installer-3-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-3-master-0)"
Feb 20 14:59:09.316040 master-0 kubenswrapper[7744]: I0220 14:59:09.315916 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:09.316040 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:09.316040 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:09.316040 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:09.316476 master-0 kubenswrapper[7744]: I0220 14:59:09.316067 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:10.318784 master-0 kubenswrapper[7744]: I0220 14:59:10.318283 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:10.318784 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:10.318784 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:10.318784 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:10.318784 master-0 kubenswrapper[7744]: I0220 14:59:10.318379 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:11.317316 master-0 kubenswrapper[7744]: I0220 14:59:11.317181 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:11.317316 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:11.317316 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:11.317316 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:11.317781 master-0 kubenswrapper[7744]: I0220 14:59:11.317372 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:12.316825 master-0 kubenswrapper[7744]: I0220 14:59:12.316737 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:12.316825 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:12.316825 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:12.316825 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:12.317645 master-0 kubenswrapper[7744]: I0220 14:59:12.316840 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:13.317429 master-0 kubenswrapper[7744]: I0220 14:59:13.317339 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:13.317429 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:13.317429 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:13.317429 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:13.318599 master-0 kubenswrapper[7744]: I0220 14:59:13.317438 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:14.316756 master-0 kubenswrapper[7744]: I0220 14:59:14.316630 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:14.316756 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:14.316756 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:14.316756 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:14.317264 master-0 kubenswrapper[7744]: I0220 14:59:14.316753 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:15.297692 master-0 kubenswrapper[7744]: E0220 14:59:15.297595 7744 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 20 14:59:15.316893 master-0 kubenswrapper[7744]: I0220 14:59:15.316810 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:15.316893 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:15.316893 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:15.316893 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:15.317350 master-0 kubenswrapper[7744]: I0220 14:59:15.316911 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:16.317382 master-0 kubenswrapper[7744]: I0220 14:59:16.317274 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:16.317382 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:16.317382 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:16.317382 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:16.318401 master-0 kubenswrapper[7744]: I0220 14:59:16.317392 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:17.316316 master-0 kubenswrapper[7744]: I0220 14:59:17.316217 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:17.316316 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:17.316316 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:17.316316 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:17.316771 master-0 kubenswrapper[7744]: I0220 14:59:17.316318 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:18.317072 master-0 kubenswrapper[7744]: I0220 14:59:18.316993 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:18.317072 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:18.317072 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:18.317072 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:18.318110 master-0 kubenswrapper[7744]: I0220 14:59:18.317078 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:19.316611 master-0 kubenswrapper[7744]: I0220 14:59:19.316427 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:19.316611 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:19.316611 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:19.316611 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:19.316611 master-0 kubenswrapper[7744]: I0220 14:59:19.316518 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:20.317065 master-0 kubenswrapper[7744]: I0220 14:59:20.317012 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:20.317065 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:20.317065 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:20.317065 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:20.317469 master-0 kubenswrapper[7744]: I0220 14:59:20.317436 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:21.318205 master-0 kubenswrapper[7744]: I0220 14:59:21.318103 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:21.318205 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:21.318205 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:21.318205 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:21.319682 master-0 kubenswrapper[7744]: I0220 14:59:21.318212 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:21.579479 master-0 kubenswrapper[7744]: E0220 14:59:21.579216 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 20 14:59:21.681376 master-0 kubenswrapper[7744]: I0220 14:59:21.681322 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/3.log"
Feb 20 14:59:21.682354 master-0 kubenswrapper[7744]: I0220 14:59:21.682326 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/2.log"
Feb 20 14:59:21.682573 master-0 kubenswrapper[7744]: I0220 14:59:21.682537 7744 generic.go:334] "Generic (PLEG): container finished" podID="a1af84e0-776b-4285-906a-6880dbc82a7b" containerID="f1b1e34a79f20570df08b5141ba77d85f604d72218b6eb7fe601f67b1fcd7a77" exitCode=1
Feb 20 14:59:21.682721 master-0 kubenswrapper[7744]: I0220 14:59:21.682599 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerDied","Data":"f1b1e34a79f20570df08b5141ba77d85f604d72218b6eb7fe601f67b1fcd7a77"}
Feb 20 14:59:21.682880 master-0 kubenswrapper[7744]: I0220 14:59:21.682859 7744 scope.go:117] "RemoveContainer" containerID="476bf416eb721beeab9d0378cf1e0c01b5fd08764fb04a228352a3607254df23"
Feb 20 14:59:21.683659 master-0 kubenswrapper[7744]: I0220 14:59:21.683598 7744 scope.go:117] "RemoveContainer" containerID="f1b1e34a79f20570df08b5141ba77d85f604d72218b6eb7fe601f67b1fcd7a77"
Feb 20 14:59:21.684241 master-0 kubenswrapper[7744]: E0220 14:59:21.684089 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-2mtj6_openshift-cluster-storage-operator(a1af84e0-776b-4285-906a-6880dbc82a7b)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" podUID="a1af84e0-776b-4285-906a-6880dbc82a7b"
Feb 20 14:59:22.316741 master-0 kubenswrapper[7744]: I0220 14:59:22.316651 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:22.316741 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:22.316741 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:22.316741 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:22.316741 master-0 kubenswrapper[7744]: I0220 14:59:22.316738 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:22.693452 master-0 kubenswrapper[7744]: I0220 14:59:22.693305 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/3.log"
Feb 20 14:59:23.317180 master-0 kubenswrapper[7744]: I0220 14:59:23.317049 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:23.317180 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:23.317180 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:23.317180 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:23.317627 master-0 kubenswrapper[7744]: I0220 14:59:23.317179 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:24.317375 master-0 kubenswrapper[7744]: I0220 14:59:24.317302 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:24.317375 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:24.317375 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:24.317375 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:24.318325 master-0 kubenswrapper[7744]: I0220 14:59:24.317382 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:25.317239 master-0 kubenswrapper[7744]: I0220 14:59:25.317120 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:25.317239 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:25.317239 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:25.317239 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:25.318450 master-0 kubenswrapper[7744]: I0220 14:59:25.317248 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:26.316801 master-0 kubenswrapper[7744]: I0220 14:59:26.316741 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:26.316801 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:26.316801 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:26.316801 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:26.317432 master-0 kubenswrapper[7744]: I0220 14:59:26.317387 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:27.319149 master-0 kubenswrapper[7744]: I0220 14:59:27.318609 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:27.319149 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:27.319149 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:27.319149 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:27.319149 master-0 kubenswrapper[7744]: I0220 14:59:27.318709 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:28.317595 master-0 kubenswrapper[7744]: I0220 14:59:28.317462 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:28.317595 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:28.317595 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:28.317595 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:28.318143 master-0 kubenswrapper[7744]: I0220 14:59:28.317607 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:29.317466 master-0 kubenswrapper[7744]: I0220 14:59:29.317374 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:29.317466 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:29.317466 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:29.317466 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:29.318430 master-0 kubenswrapper[7744]: I0220 14:59:29.317477 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:30.318016 master-0 kubenswrapper[7744]: I0220 14:59:30.317888 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:30.318016 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:30.318016 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:30.318016 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:30.319254 master-0 kubenswrapper[7744]: I0220 14:59:30.318038 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:31.317813 master-0 kubenswrapper[7744]: I0220 14:59:31.317696 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:31.317813 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:31.317813 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:31.317813 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:31.319300 master-0 kubenswrapper[7744]: I0220 14:59:31.317818 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:32.023285 master-0 kubenswrapper[7744]: I0220 14:59:32.023003 7744 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-6r5qx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 20 14:59:32.023285 master-0 kubenswrapper[7744]: I0220 14:59:32.023279 7744 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" podUID="8157f73d-c757-40c4-80bc-3c9de2f2288a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 20 14:59:32.316781 master-0 kubenswrapper[7744]: I0220 14:59:32.316639 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:32.316781 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:32.316781 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:32.316781 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:32.316781 master-0 kubenswrapper[7744]: I0220 14:59:32.316705 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:33.316707 master-0 kubenswrapper[7744]: I0220 14:59:33.316641 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:33.316707 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:33.316707 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:33.316707 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:33.317719 master-0 kubenswrapper[7744]: I0220 14:59:33.316727 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:34.317459 master-0 kubenswrapper[7744]: I0220 14:59:34.317389 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:34.317459 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:34.317459 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:34.317459 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:34.318526 master-0 kubenswrapper[7744]: I0220 14:59:34.318102 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:35.038600 master-0 kubenswrapper[7744]: I0220 14:59:35.038427 7744 scope.go:117] "RemoveContainer" containerID="f1b1e34a79f20570df08b5141ba77d85f604d72218b6eb7fe601f67b1fcd7a77"
Feb 20 14:59:35.038872 master-0 kubenswrapper[7744]: E0220 14:59:35.038828 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-2mtj6_openshift-cluster-storage-operator(a1af84e0-776b-4285-906a-6880dbc82a7b)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" podUID="a1af84e0-776b-4285-906a-6880dbc82a7b"
Feb 20 14:59:35.317286 master-0 kubenswrapper[7744]: I0220 14:59:35.317115 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:35.317286 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:35.317286 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:35.317286 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:35.317286 master-0 kubenswrapper[7744]: I0220 14:59:35.317216 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:36.317008 master-0 kubenswrapper[7744]: I0220 14:59:36.316878 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:36.317008 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:36.317008 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:36.317008 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:36.317493 master-0 kubenswrapper[7744]: I0220 14:59:36.317008 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:37.317170 master-0 kubenswrapper[7744]: I0220 14:59:37.317085 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:37.317170 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:37.317170 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:37.317170 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:37.318474 master-0 kubenswrapper[7744]: I0220 14:59:37.317183 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:38.316571 master-0 kubenswrapper[7744]: I0220 14:59:38.316284 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:38.316571 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:38.316571 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:38.316571 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:38.316841 master-0 kubenswrapper[7744]: I0220 14:59:38.316604 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:38.580982 master-0 kubenswrapper[7744]: E0220 14:59:38.580735 7744 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 20 14:59:39.316555 master-0 kubenswrapper[7744]: I0220 14:59:39.316448 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:39.316555 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:39.316555 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:39.316555 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:39.316995 master-0 kubenswrapper[7744]: I0220 14:59:39.316591 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:40.113452 master-0 kubenswrapper[7744]: I0220 14:59:40.113361 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" podStartSLOduration=268.208779687 podStartE2EDuration="4m33.113342735s" podCreationTimestamp="2026-02-20 14:55:07 +0000 UTC" firstStartedPulling="2026-02-20 14:55:08.30949312 +0000 UTC m=+507.511693050" lastFinishedPulling="2026-02-20 14:55:13.214056138 +0000 UTC m=+512.416256098" observedRunningTime="2026-02-20 14:59:40.088278994 +0000 UTC m=+779.290478984" watchObservedRunningTime="2026-02-20 14:59:40.113342735 +0000 UTC m=+779.315542655"
Feb 20 14:59:40.317471 master-0 kubenswrapper[7744]: I0220 14:59:40.317406 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:40.317471 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:40.317471 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:40.317471 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:40.318089 master-0 kubenswrapper[7744]: I0220 14:59:40.317496 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:40.371755 master-0 kubenswrapper[7744]: I0220 14:59:40.352914 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 20 14:59:40.371755 master-0 kubenswrapper[7744]: I0220 14:59:40.358744 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 20 14:59:41.052905 master-0 kubenswrapper[7744]: I0220 14:59:41.052812 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3753e8e6-e86c-4841-bc82-ce5321b5583f" path="/var/lib/kubelet/pods/3753e8e6-e86c-4841-bc82-ce5321b5583f/volumes"
Feb 20 14:59:41.316891 master-0 kubenswrapper[7744]: I0220 14:59:41.316663 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:41.316891 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:41.316891 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:41.316891 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:41.318380 master-0 kubenswrapper[7744]: I0220 14:59:41.317982 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:42.316315 master-0 kubenswrapper[7744]: I0220 14:59:42.316271 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:42.316315 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:42.316315 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:42.316315 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:42.316704 master-0 kubenswrapper[7744]: I0220 14:59:42.316672 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:42.884583 master-0 kubenswrapper[7744]: I0220 14:59:42.880996 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-2sw9z_1fe69517-eec2-4721-933c-fa27cea7ab1f/package-server-manager/0.log"
Feb 20 14:59:42.884583 master-0 kubenswrapper[7744]: I0220 14:59:42.882350 7744 generic.go:334] "Generic (PLEG): container finished" podID="1fe69517-eec2-4721-933c-fa27cea7ab1f" containerID="8fa1fcd077e28cf5cfeec8c2cafd29cf0677802573ac33c46747c76a0973c8ec" exitCode=1
Feb 20 14:59:42.884583 master-0 kubenswrapper[7744]: I0220 14:59:42.882401 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" event={"ID":"1fe69517-eec2-4721-933c-fa27cea7ab1f","Type":"ContainerDied","Data":"8fa1fcd077e28cf5cfeec8c2cafd29cf0677802573ac33c46747c76a0973c8ec"}
Feb 20 14:59:42.886707 master-0 kubenswrapper[7744]: I0220 14:59:42.884658 7744 scope.go:117] "RemoveContainer" containerID="8fa1fcd077e28cf5cfeec8c2cafd29cf0677802573ac33c46747c76a0973c8ec"
Feb 20 14:59:43.316914 master-0 kubenswrapper[7744]: I0220 14:59:43.316821 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 20 14:59:43.316914 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld
Feb 20 14:59:43.316914 master-0 kubenswrapper[7744]: [+]process-running ok
Feb 20 14:59:43.316914 master-0 kubenswrapper[7744]: healthz check failed
Feb 20 14:59:43.317353 master-0 kubenswrapper[7744]: I0220 14:59:43.316995 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 20 14:59:43.893874 master-0 kubenswrapper[7744]: I0220 14:59:43.893789 7744
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-gjdb4_0bedbe69-fc4b-4bd7-bcc2-acead927eda2/machine-api-operator/0.log" Feb 20 14:59:43.894812 master-0 kubenswrapper[7744]: I0220 14:59:43.894458 7744 generic.go:334] "Generic (PLEG): container finished" podID="0bedbe69-fc4b-4bd7-bcc2-acead927eda2" containerID="09c2a559e7cc2a5451aca2755577ab8e7c2b5ea2ef73bac50c4295f2287bdf15" exitCode=255 Feb 20 14:59:43.894812 master-0 kubenswrapper[7744]: I0220 14:59:43.894529 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" event={"ID":"0bedbe69-fc4b-4bd7-bcc2-acead927eda2","Type":"ContainerDied","Data":"09c2a559e7cc2a5451aca2755577ab8e7c2b5ea2ef73bac50c4295f2287bdf15"} Feb 20 14:59:43.895448 master-0 kubenswrapper[7744]: I0220 14:59:43.895391 7744 scope.go:117] "RemoveContainer" containerID="09c2a559e7cc2a5451aca2755577ab8e7c2b5ea2ef73bac50c4295f2287bdf15" Feb 20 14:59:43.897727 master-0 kubenswrapper[7744]: I0220 14:59:43.897614 7744 generic.go:334] "Generic (PLEG): container finished" podID="a4339bd5-b8d1-467e-8158-4464ea901148" containerID="23b61efd81399a78fa532e7f0cf8b35a9b7f7f7e97f61e6f0f85ac41949a2a92" exitCode=0 Feb 20 14:59:43.898007 master-0 kubenswrapper[7744]: I0220 14:59:43.897738 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" event={"ID":"a4339bd5-b8d1-467e-8158-4464ea901148","Type":"ContainerDied","Data":"23b61efd81399a78fa532e7f0cf8b35a9b7f7f7e97f61e6f0f85ac41949a2a92"} Feb 20 14:59:43.898653 master-0 kubenswrapper[7744]: I0220 14:59:43.898463 7744 scope.go:117] "RemoveContainer" containerID="23b61efd81399a78fa532e7f0cf8b35a9b7f7f7e97f61e6f0f85ac41949a2a92" Feb 20 14:59:43.901893 master-0 kubenswrapper[7744]: I0220 14:59:43.901841 7744 generic.go:334] "Generic (PLEG): container finished" podID="4c31b8a7-edcb-403d-9122-7eb740f7d659" 
containerID="696e06ef6554e221cbbd27e48c3197d621e72c8d19b1df8b12bd4eab6b3279b8" exitCode=0 Feb 20 14:59:43.902086 master-0 kubenswrapper[7744]: I0220 14:59:43.901900 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" event={"ID":"4c31b8a7-edcb-403d-9122-7eb740f7d659","Type":"ContainerDied","Data":"696e06ef6554e221cbbd27e48c3197d621e72c8d19b1df8b12bd4eab6b3279b8"} Feb 20 14:59:43.902086 master-0 kubenswrapper[7744]: I0220 14:59:43.901995 7744 scope.go:117] "RemoveContainer" containerID="941dd44ae98490c4a66ceb486a6367ef40fefdfd465008c4ef290585229b84c1" Feb 20 14:59:43.902597 master-0 kubenswrapper[7744]: I0220 14:59:43.902543 7744 scope.go:117] "RemoveContainer" containerID="696e06ef6554e221cbbd27e48c3197d621e72c8d19b1df8b12bd4eab6b3279b8" Feb 20 14:59:43.910836 master-0 kubenswrapper[7744]: I0220 14:59:43.910163 7744 generic.go:334] "Generic (PLEG): container finished" podID="16d6dd52-d73b-4696-873e-00a6d4bb2c77" containerID="e6e379ec088445dd86d2191d2d0584d608d0fb6a75f60858cd436421f083f620" exitCode=0 Feb 20 14:59:43.910836 master-0 kubenswrapper[7744]: I0220 14:59:43.910234 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" event={"ID":"16d6dd52-d73b-4696-873e-00a6d4bb2c77","Type":"ContainerDied","Data":"e6e379ec088445dd86d2191d2d0584d608d0fb6a75f60858cd436421f083f620"} Feb 20 14:59:43.914129 master-0 kubenswrapper[7744]: I0220 14:59:43.913352 7744 scope.go:117] "RemoveContainer" containerID="e6e379ec088445dd86d2191d2d0584d608d0fb6a75f60858cd436421f083f620" Feb 20 14:59:43.914129 master-0 kubenswrapper[7744]: I0220 14:59:43.913872 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-2sw9z_1fe69517-eec2-4721-933c-fa27cea7ab1f/package-server-manager/0.log" Feb 20 14:59:43.914909 master-0 
kubenswrapper[7744]: I0220 14:59:43.914862 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" event={"ID":"1fe69517-eec2-4721-933c-fa27cea7ab1f","Type":"ContainerStarted","Data":"4fd1f054de6dcbc46bdd02fc9bf3ec3e08235db2968aa7d6b81eadf482d090a3"} Feb 20 14:59:43.915446 master-0 kubenswrapper[7744]: I0220 14:59:43.915337 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 14:59:43.918311 master-0 kubenswrapper[7744]: I0220 14:59:43.918250 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-c8w7r_b385880b-a26b-4353-8f6f-b7f926bcc67c/cluster-autoscaler-operator/0.log" Feb 20 14:59:43.918885 master-0 kubenswrapper[7744]: I0220 14:59:43.918819 7744 generic.go:334] "Generic (PLEG): container finished" podID="b385880b-a26b-4353-8f6f-b7f926bcc67c" containerID="fcbb2a13969414b96cd30dbad7457a49997232b9842608fdd68bbd19061a8401" exitCode=255 Feb 20 14:59:43.918885 master-0 kubenswrapper[7744]: I0220 14:59:43.918845 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" event={"ID":"b385880b-a26b-4353-8f6f-b7f926bcc67c","Type":"ContainerDied","Data":"fcbb2a13969414b96cd30dbad7457a49997232b9842608fdd68bbd19061a8401"} Feb 20 14:59:43.919574 master-0 kubenswrapper[7744]: I0220 14:59:43.919506 7744 scope.go:117] "RemoveContainer" containerID="fcbb2a13969414b96cd30dbad7457a49997232b9842608fdd68bbd19061a8401" Feb 20 14:59:43.921441 master-0 kubenswrapper[7744]: I0220 14:59:43.921399 7744 generic.go:334] "Generic (PLEG): container finished" podID="27ab8945-6a5b-4f7d-b893-6358da214499" containerID="3a018b588cd0fab81aef4437e8a3c01bf2d7562f85789ce7770c3b488cc91b89" exitCode=0 Feb 20 14:59:43.922482 master-0 kubenswrapper[7744]: I0220 14:59:43.921482 7744 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" event={"ID":"27ab8945-6a5b-4f7d-b893-6358da214499","Type":"ContainerDied","Data":"3a018b588cd0fab81aef4437e8a3c01bf2d7562f85789ce7770c3b488cc91b89"} Feb 20 14:59:43.925008 master-0 kubenswrapper[7744]: I0220 14:59:43.924896 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-584cc7bcb5-j66jm_45d7ef0c-272b-4d1e-965f-484975d5d25c/openshift-controller-manager-operator/0.log" Feb 20 14:59:43.925008 master-0 kubenswrapper[7744]: I0220 14:59:43.924984 7744 generic.go:334] "Generic (PLEG): container finished" podID="45d7ef0c-272b-4d1e-965f-484975d5d25c" containerID="2b921a59215a9b57fc0e140139af8ee009d893b2733cf5fcafdbd68899442899" exitCode=0 Feb 20 14:59:43.925227 master-0 kubenswrapper[7744]: I0220 14:59:43.925054 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" event={"ID":"45d7ef0c-272b-4d1e-965f-484975d5d25c","Type":"ContainerDied","Data":"2b921a59215a9b57fc0e140139af8ee009d893b2733cf5fcafdbd68899442899"} Feb 20 14:59:43.925666 master-0 kubenswrapper[7744]: I0220 14:59:43.925605 7744 scope.go:117] "RemoveContainer" containerID="2b921a59215a9b57fc0e140139af8ee009d893b2733cf5fcafdbd68899442899" Feb 20 14:59:43.933806 master-0 kubenswrapper[7744]: I0220 14:59:43.933085 7744 generic.go:334] "Generic (PLEG): container finished" podID="989af121-da08-4f40-b08c-dd2aa67bc60c" containerID="832f243cdb2cdff1065e35c1a4b8eb6397a6696e55399d5bf71d3cb4f866d80d" exitCode=0 Feb 20 14:59:43.933806 master-0 kubenswrapper[7744]: I0220 14:59:43.933175 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" 
event={"ID":"989af121-da08-4f40-b08c-dd2aa67bc60c","Type":"ContainerDied","Data":"832f243cdb2cdff1065e35c1a4b8eb6397a6696e55399d5bf71d3cb4f866d80d"} Feb 20 14:59:43.933806 master-0 kubenswrapper[7744]: I0220 14:59:43.933623 7744 scope.go:117] "RemoveContainer" containerID="832f243cdb2cdff1065e35c1a4b8eb6397a6696e55399d5bf71d3cb4f866d80d" Feb 20 14:59:43.943021 master-0 kubenswrapper[7744]: I0220 14:59:43.937233 7744 scope.go:117] "RemoveContainer" containerID="3a018b588cd0fab81aef4437e8a3c01bf2d7562f85789ce7770c3b488cc91b89" Feb 20 14:59:43.943021 master-0 kubenswrapper[7744]: I0220 14:59:43.940850 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-tj8fx_9fd9f419-2cdc-4991-8fb9-87d76ac58976/network-operator/0.log" Feb 20 14:59:43.943021 master-0 kubenswrapper[7744]: I0220 14:59:43.940913 7744 generic.go:334] "Generic (PLEG): container finished" podID="9fd9f419-2cdc-4991-8fb9-87d76ac58976" containerID="5761b5d97bb857209597024a19cdbe2341d245c395e6ce681c8bc8fd7fa023bd" exitCode=0 Feb 20 14:59:43.943021 master-0 kubenswrapper[7744]: I0220 14:59:43.941031 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" event={"ID":"9fd9f419-2cdc-4991-8fb9-87d76ac58976","Type":"ContainerDied","Data":"5761b5d97bb857209597024a19cdbe2341d245c395e6ce681c8bc8fd7fa023bd"} Feb 20 14:59:43.943021 master-0 kubenswrapper[7744]: I0220 14:59:43.941559 7744 scope.go:117] "RemoveContainer" containerID="5761b5d97bb857209597024a19cdbe2341d245c395e6ce681c8bc8fd7fa023bd" Feb 20 14:59:43.963759 master-0 kubenswrapper[7744]: I0220 14:59:43.963687 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" event={"ID":"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367","Type":"ContainerDied","Data":"361cac7f381ef490c05a6ad20d7d519e61ac704ec32bc6d37576fd4551ff3afc"} Feb 20 14:59:43.963956 master-0 
kubenswrapper[7744]: I0220 14:59:43.963635 7744 generic.go:334] "Generic (PLEG): container finished" podID="3bf5be04-e4dd-44d9-be1a-3abe6ddd2367" containerID="361cac7f381ef490c05a6ad20d7d519e61ac704ec32bc6d37576fd4551ff3afc" exitCode=0 Feb 20 14:59:43.964496 master-0 kubenswrapper[7744]: I0220 14:59:43.964447 7744 scope.go:117] "RemoveContainer" containerID="361cac7f381ef490c05a6ad20d7d519e61ac704ec32bc6d37576fd4551ff3afc" Feb 20 14:59:43.970720 master-0 kubenswrapper[7744]: I0220 14:59:43.970685 7744 generic.go:334] "Generic (PLEG): container finished" podID="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" containerID="ce6cf48b03cf7ea4bb59cbc88338b3797dd3cd5289e6bbf78ef6ac04abd04f98" exitCode=0 Feb 20 14:59:43.970872 master-0 kubenswrapper[7744]: I0220 14:59:43.970756 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" event={"ID":"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd","Type":"ContainerDied","Data":"ce6cf48b03cf7ea4bb59cbc88338b3797dd3cd5289e6bbf78ef6ac04abd04f98"} Feb 20 14:59:43.971182 master-0 kubenswrapper[7744]: I0220 14:59:43.971147 7744 scope.go:117] "RemoveContainer" containerID="ce6cf48b03cf7ea4bb59cbc88338b3797dd3cd5289e6bbf78ef6ac04abd04f98" Feb 20 14:59:43.976134 master-0 kubenswrapper[7744]: I0220 14:59:43.976092 7744 generic.go:334] "Generic (PLEG): container finished" podID="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" containerID="a95da6b755620b3477b82b60290cab82bafb501ad18fb013d6a2d035fb2977b7" exitCode=0 Feb 20 14:59:43.976378 master-0 kubenswrapper[7744]: I0220 14:59:43.976345 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerDied","Data":"a95da6b755620b3477b82b60290cab82bafb501ad18fb013d6a2d035fb2977b7"} Feb 20 14:59:43.977245 master-0 kubenswrapper[7744]: I0220 14:59:43.977212 7744 scope.go:117] "RemoveContainer" 
containerID="a95da6b755620b3477b82b60290cab82bafb501ad18fb013d6a2d035fb2977b7" Feb 20 14:59:43.982875 master-0 kubenswrapper[7744]: I0220 14:59:43.982821 7744 generic.go:334] "Generic (PLEG): container finished" podID="234a44fd-c153-47a6-a11d-7d4b7165c236" containerID="581f236214a140a0dd97c9926ea209ede3f39ed6cfcbab89bbd1dddd4483776d" exitCode=0 Feb 20 14:59:43.983017 master-0 kubenswrapper[7744]: I0220 14:59:43.982890 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" event={"ID":"234a44fd-c153-47a6-a11d-7d4b7165c236","Type":"ContainerDied","Data":"581f236214a140a0dd97c9926ea209ede3f39ed6cfcbab89bbd1dddd4483776d"} Feb 20 14:59:43.983628 master-0 kubenswrapper[7744]: I0220 14:59:43.983592 7744 scope.go:117] "RemoveContainer" containerID="581f236214a140a0dd97c9926ea209ede3f39ed6cfcbab89bbd1dddd4483776d" Feb 20 14:59:44.028519 master-0 kubenswrapper[7744]: I0220 14:59:44.028446 7744 scope.go:117] "RemoveContainer" containerID="233f31cc87ed77a81bb475184c8275cb1327d0aaed87c186b3895bc1d70da1c4" Feb 20 14:59:44.221298 master-0 kubenswrapper[7744]: I0220 14:59:44.221243 7744 scope.go:117] "RemoveContainer" containerID="31ee4b259747c34f0e0b3ef2fb4560b0c5185716f80403e8aa587e56efaa8aa2" Feb 20 14:59:44.319212 master-0 kubenswrapper[7744]: I0220 14:59:44.318904 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:44.319212 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:44.319212 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:44.319212 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:44.319212 master-0 kubenswrapper[7744]: I0220 14:59:44.318960 7744 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:44.332405 master-0 kubenswrapper[7744]: I0220 14:59:44.332367 7744 scope.go:117] "RemoveContainer" containerID="206ff74dbf8ac205b7526aba69f67598c7eb64c83ff678f0e12a41fa367def5c" Feb 20 14:59:44.383049 master-0 kubenswrapper[7744]: I0220 14:59:44.382973 7744 scope.go:117] "RemoveContainer" containerID="e7d3fca444d3332e414ef45d428d9305bcf3afae66213559a3b368f710b1a743" Feb 20 14:59:44.445896 master-0 kubenswrapper[7744]: I0220 14:59:44.445859 7744 scope.go:117] "RemoveContainer" containerID="62a31d32d4ca4d676ab042ba4779a3437daeccc9e4cd7a7e48c41884a5b21dfe" Feb 20 14:59:44.601577 master-0 kubenswrapper[7744]: I0220 14:59:44.601519 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 14:59:44.601577 master-0 kubenswrapper[7744]: I0220 14:59:44.601572 7744 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 14:59:44.991173 master-0 kubenswrapper[7744]: I0220 14:59:44.991127 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerStarted","Data":"c396e73ee6b7eb5c6449cf276a9d0d5ae9c9bc55bb4f24a00ddad593e1c6275c"} Feb 20 14:59:44.993225 master-0 kubenswrapper[7744]: I0220 14:59:44.993172 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" event={"ID":"234a44fd-c153-47a6-a11d-7d4b7165c236","Type":"ContainerStarted","Data":"103407b542cb92b60a19cb575033cc9b552341ed431c515e2e942eb226538d8d"} Feb 20 14:59:44.994968 master-0 kubenswrapper[7744]: I0220 14:59:44.994916 7744 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" event={"ID":"9fd9f419-2cdc-4991-8fb9-87d76ac58976","Type":"ContainerStarted","Data":"dc441fa27824734a377d9db318c86f20db95ce4983905e77258b9eeaa40c81d4"} Feb 20 14:59:44.996458 master-0 kubenswrapper[7744]: I0220 14:59:44.996427 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" event={"ID":"989af121-da08-4f40-b08c-dd2aa67bc60c","Type":"ContainerStarted","Data":"179a409fb734cc1e38b874ef7dc3085074afe4aed4fb1a3a89836ccbf244466e"} Feb 20 14:59:44.998161 master-0 kubenswrapper[7744]: I0220 14:59:44.998129 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" event={"ID":"16d6dd52-d73b-4696-873e-00a6d4bb2c77","Type":"ContainerStarted","Data":"34881b0a33741767f43b826868d3348dca748d3964c8f347ae447e2ba7dda28a"} Feb 20 14:59:45.000099 master-0 kubenswrapper[7744]: I0220 14:59:45.000069 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-c8w7r_b385880b-a26b-4353-8f6f-b7f926bcc67c/cluster-autoscaler-operator/0.log" Feb 20 14:59:45.000445 master-0 kubenswrapper[7744]: I0220 14:59:45.000417 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" event={"ID":"b385880b-a26b-4353-8f6f-b7f926bcc67c","Type":"ContainerStarted","Data":"9ab968c039881eca411605f2dc6ddf6c3bae4902938cad1735091ee161273d08"} Feb 20 14:59:45.002364 master-0 kubenswrapper[7744]: I0220 14:59:45.002336 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" 
event={"ID":"45d7ef0c-272b-4d1e-965f-484975d5d25c","Type":"ContainerStarted","Data":"09d216a3abc55643af39c5d59bcb2e247cd57b0e4c4569a1bf7ef453f5b7658a"} Feb 20 14:59:45.004143 master-0 kubenswrapper[7744]: I0220 14:59:45.004118 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" event={"ID":"a4339bd5-b8d1-467e-8158-4464ea901148","Type":"ContainerStarted","Data":"46fafdf5fa767d53d528bf20ba8d233f608a0480ae3b29b71e6e78f155340f4a"} Feb 20 14:59:45.004247 master-0 kubenswrapper[7744]: I0220 14:59:45.004222 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 14:59:45.005912 master-0 kubenswrapper[7744]: I0220 14:59:45.005882 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" event={"ID":"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd","Type":"ContainerStarted","Data":"00f9e9b6b6ccf56cbc32cbe6a3bf7dcabdcf2702c8bfb772dfa8c5e881fe2a66"} Feb 20 14:59:45.006149 master-0 kubenswrapper[7744]: I0220 14:59:45.006122 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 14:59:45.007567 master-0 kubenswrapper[7744]: I0220 14:59:45.007529 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" event={"ID":"4c31b8a7-edcb-403d-9122-7eb740f7d659","Type":"ContainerStarted","Data":"8df41532c87905e245b26ec36aa0216e69f949d06e668930ddee22c3fd75c8ba"} Feb 20 14:59:45.009988 master-0 kubenswrapper[7744]: I0220 14:59:45.009941 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-gjdb4_0bedbe69-fc4b-4bd7-bcc2-acead927eda2/machine-api-operator/0.log" Feb 20 14:59:45.010515 master-0 
kubenswrapper[7744]: I0220 14:59:45.010480 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" event={"ID":"0bedbe69-fc4b-4bd7-bcc2-acead927eda2","Type":"ContainerStarted","Data":"82af1e0ac2a38d423d2f66ae453fd46fc4c8ae778116720f18e17a37eb6994b6"} Feb 20 14:59:45.012184 master-0 kubenswrapper[7744]: I0220 14:59:45.012156 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" event={"ID":"27ab8945-6a5b-4f7d-b893-6358da214499","Type":"ContainerStarted","Data":"e8c5d6ce583150e5025bcd44242a6fd0048c02eb48e405e4c26fcefe2dcec569"} Feb 20 14:59:45.014724 master-0 kubenswrapper[7744]: I0220 14:59:45.014689 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" event={"ID":"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367","Type":"ContainerStarted","Data":"0e3955eb45775218b3ec78e9d48cc3dcca22b622e9fd2c4efce8aad1a0511807"} Feb 20 14:59:45.315295 master-0 kubenswrapper[7744]: I0220 14:59:45.315246 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:45.315295 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:45.315295 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:45.315295 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:45.315555 master-0 kubenswrapper[7744]: I0220 14:59:45.315311 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:45.381545 master-0 kubenswrapper[7744]: I0220 
14:59:45.381481 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 14:59:46.316844 master-0 kubenswrapper[7744]: I0220 14:59:46.316739 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:46.316844 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:46.316844 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:46.316844 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:46.316844 master-0 kubenswrapper[7744]: I0220 14:59:46.316839 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:47.315559 master-0 kubenswrapper[7744]: I0220 14:59:47.315483 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:47.315559 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:47.315559 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:47.315559 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:47.315559 master-0 kubenswrapper[7744]: I0220 14:59:47.315540 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:47.610569 master-0 
kubenswrapper[7744]: I0220 14:59:47.610380 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 14:59:48.317109 master-0 kubenswrapper[7744]: I0220 14:59:48.317012 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:48.317109 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:48.317109 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:48.317109 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:48.317967 master-0 kubenswrapper[7744]: I0220 14:59:48.317115 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:49.317059 master-0 kubenswrapper[7744]: I0220 14:59:49.316918 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:49.317059 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:49.317059 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:49.317059 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:49.317059 master-0 kubenswrapper[7744]: I0220 14:59:49.317035 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:50.038454 
master-0 kubenswrapper[7744]: I0220 14:59:50.038386 7744 scope.go:117] "RemoveContainer" containerID="f1b1e34a79f20570df08b5141ba77d85f604d72218b6eb7fe601f67b1fcd7a77" Feb 20 14:59:50.038762 master-0 kubenswrapper[7744]: E0220 14:59:50.038721 7744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-2mtj6_openshift-cluster-storage-operator(a1af84e0-776b-4285-906a-6880dbc82a7b)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" podUID="a1af84e0-776b-4285-906a-6880dbc82a7b" Feb 20 14:59:50.318043 master-0 kubenswrapper[7744]: I0220 14:59:50.317859 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:50.318043 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:50.318043 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:50.318043 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:50.318043 master-0 kubenswrapper[7744]: I0220 14:59:50.317978 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:51.317264 master-0 kubenswrapper[7744]: I0220 14:59:51.317157 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:51.317264 master-0 kubenswrapper[7744]: [-]has-synced failed: reason 
withheld Feb 20 14:59:51.317264 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:51.317264 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:51.317264 master-0 kubenswrapper[7744]: I0220 14:59:51.317246 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:52.316480 master-0 kubenswrapper[7744]: I0220 14:59:52.316327 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:52.316480 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:52.316480 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:52.316480 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:52.316480 master-0 kubenswrapper[7744]: I0220 14:59:52.316447 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:53.317437 master-0 kubenswrapper[7744]: I0220 14:59:53.317314 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:53.317437 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:53.317437 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:53.317437 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:53.317437 master-0 kubenswrapper[7744]: I0220 
14:59:53.317414 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:54.317028 master-0 kubenswrapper[7744]: I0220 14:59:54.316961 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:54.317028 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:54.317028 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:54.317028 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:54.317394 master-0 kubenswrapper[7744]: I0220 14:59:54.317062 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:55.317488 master-0 kubenswrapper[7744]: I0220 14:59:55.317320 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:55.317488 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:55.317488 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:55.317488 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:55.317488 master-0 kubenswrapper[7744]: I0220 14:59:55.317454 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 20 14:59:56.317477 master-0 kubenswrapper[7744]: I0220 14:59:56.317373 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:56.317477 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:56.317477 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:56.317477 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:56.318419 master-0 kubenswrapper[7744]: I0220 14:59:56.317479 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:57.316967 master-0 kubenswrapper[7744]: I0220 14:59:57.316863 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:57.316967 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:57.316967 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:57.316967 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:57.316967 master-0 kubenswrapper[7744]: I0220 14:59:57.316987 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:58.316839 master-0 kubenswrapper[7744]: I0220 14:59:58.316727 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:58.316839 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:58.316839 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:58.316839 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:58.316839 master-0 kubenswrapper[7744]: I0220 14:59:58.316823 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 14:59:59.316901 master-0 kubenswrapper[7744]: I0220 14:59:59.316810 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 14:59:59.316901 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 14:59:59.316901 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 14:59:59.316901 master-0 kubenswrapper[7744]: healthz check failed Feb 20 14:59:59.318054 master-0 kubenswrapper[7744]: I0220 14:59:59.316906 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:00.327429 master-0 kubenswrapper[7744]: I0220 15:00:00.327312 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:00.327429 master-0 kubenswrapper[7744]: 
[-]has-synced failed: reason withheld Feb 20 15:00:00.327429 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:00.327429 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:00.328753 master-0 kubenswrapper[7744]: I0220 15:00:00.327684 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:01.316725 master-0 kubenswrapper[7744]: I0220 15:00:01.316621 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:01.316725 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:00:01.316725 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:01.316725 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:01.317431 master-0 kubenswrapper[7744]: I0220 15:00:01.316753 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:02.317125 master-0 kubenswrapper[7744]: I0220 15:00:02.317048 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:02.317125 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:00:02.317125 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:02.317125 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:02.318079 master-0 
kubenswrapper[7744]: I0220 15:00:02.317144 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:03.316742 master-0 kubenswrapper[7744]: I0220 15:00:03.316639 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:03.316742 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:00:03.316742 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:03.316742 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:03.317210 master-0 kubenswrapper[7744]: I0220 15:00:03.316744 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:04.315906 master-0 kubenswrapper[7744]: I0220 15:00:04.315839 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:04.315906 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:00:04.315906 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:04.315906 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:04.315906 master-0 kubenswrapper[7744]: I0220 15:00:04.315906 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:05.037955 master-0 kubenswrapper[7744]: I0220 15:00:05.037898 7744 scope.go:117] "RemoveContainer" containerID="f1b1e34a79f20570df08b5141ba77d85f604d72218b6eb7fe601f67b1fcd7a77" Feb 20 15:00:05.317366 master-0 kubenswrapper[7744]: I0220 15:00:05.317191 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:05.317366 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:00:05.317366 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:05.317366 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:05.317366 master-0 kubenswrapper[7744]: I0220 15:00:05.317284 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:06.185364 master-0 kubenswrapper[7744]: I0220 15:00:06.185285 7744 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/3.log" Feb 20 15:00:06.185364 master-0 kubenswrapper[7744]: I0220 15:00:06.185359 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerStarted","Data":"2ae4537b93ca1df380fb49c25fa560c619b235ffc48c39d0f2e8fa5a73331fc8"} Feb 20 15:00:06.316771 master-0 kubenswrapper[7744]: I0220 15:00:06.316656 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:06.316771 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:00:06.316771 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:06.316771 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:06.317311 master-0 kubenswrapper[7744]: I0220 15:00:06.316794 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:07.316701 master-0 kubenswrapper[7744]: I0220 15:00:07.316594 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:07.316701 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:00:07.316701 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:07.316701 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:07.317713 master-0 kubenswrapper[7744]: I0220 15:00:07.316714 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:08.317062 master-0 kubenswrapper[7744]: I0220 15:00:08.316983 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:08.317062 master-0 kubenswrapper[7744]: 
[-]has-synced failed: reason withheld Feb 20 15:00:08.317062 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:08.317062 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:08.318106 master-0 kubenswrapper[7744]: I0220 15:00:08.317069 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:09.316812 master-0 kubenswrapper[7744]: I0220 15:00:09.316693 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:09.316812 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:00:09.316812 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:09.316812 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:09.316812 master-0 kubenswrapper[7744]: I0220 15:00:09.316781 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:10.316640 master-0 kubenswrapper[7744]: I0220 15:00:10.316555 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:10.316640 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:00:10.316640 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:10.316640 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:10.317048 master-0 
kubenswrapper[7744]: I0220 15:00:10.316646 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:11.316977 master-0 kubenswrapper[7744]: I0220 15:00:11.316834 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:11.316977 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:00:11.316977 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:11.316977 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:11.318064 master-0 kubenswrapper[7744]: I0220 15:00:11.316981 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:12.316117 master-0 kubenswrapper[7744]: I0220 15:00:12.315991 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:12.316117 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:00:12.316117 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:12.316117 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:12.316640 master-0 kubenswrapper[7744]: I0220 15:00:12.316141 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:13.316491 master-0 kubenswrapper[7744]: I0220 15:00:13.316448 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:00:13.316491 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:00:13.316491 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:00:13.316491 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:00:13.317132 master-0 kubenswrapper[7744]: I0220 15:00:13.317076 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:00:13.317180 master-0 kubenswrapper[7744]: I0220 15:00:13.317168 7744 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:00:13.317876 master-0 kubenswrapper[7744]: I0220 15:00:13.317839 7744 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"4c99e85f05d7056363eecf219cc429ad9226d3b3266d2b4c70190b2024933a11"} pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" containerMessage="Container router failed startup probe, will be restarted" Feb 20 15:00:13.317918 master-0 kubenswrapper[7744]: I0220 15:00:13.317902 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" containerID="cri-o://4c99e85f05d7056363eecf219cc429ad9226d3b3266d2b4c70190b2024933a11" gracePeriod=3600 Feb 20 15:00:14.033233 
master-0 kubenswrapper[7744]: I0220 15:00:14.033114 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"] Feb 20 15:00:14.033543 master-0 kubenswrapper[7744]: E0220 15:00:14.033512 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef51d3b-cd8b-4f34-961e-8daebbed3ca6" containerName="installer" Feb 20 15:00:14.033543 master-0 kubenswrapper[7744]: I0220 15:00:14.033533 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef51d3b-cd8b-4f34-961e-8daebbed3ca6" containerName="installer" Feb 20 15:00:14.033765 master-0 kubenswrapper[7744]: E0220 15:00:14.033603 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380174fb-b30c-4f45-9119-397cdca91756" containerName="installer" Feb 20 15:00:14.033765 master-0 kubenswrapper[7744]: I0220 15:00:14.033618 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="380174fb-b30c-4f45-9119-397cdca91756" containerName="installer" Feb 20 15:00:14.033765 master-0 kubenswrapper[7744]: E0220 15:00:14.033656 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3753e8e6-e86c-4841-bc82-ce5321b5583f" containerName="installer" Feb 20 15:00:14.033765 master-0 kubenswrapper[7744]: I0220 15:00:14.033670 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="3753e8e6-e86c-4841-bc82-ce5321b5583f" containerName="installer" Feb 20 15:00:14.033765 master-0 kubenswrapper[7744]: E0220 15:00:14.033710 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6285323-3e75-4d44-ad05-98890c097dd2" containerName="installer" Feb 20 15:00:14.033765 master-0 kubenswrapper[7744]: I0220 15:00:14.033722 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6285323-3e75-4d44-ad05-98890c097dd2" containerName="installer" Feb 20 15:00:14.034411 master-0 kubenswrapper[7744]: I0220 15:00:14.033984 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6285323-3e75-4d44-ad05-98890c097dd2" 
containerName="installer" Feb 20 15:00:14.034411 master-0 kubenswrapper[7744]: I0220 15:00:14.034017 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="380174fb-b30c-4f45-9119-397cdca91756" containerName="installer" Feb 20 15:00:14.034411 master-0 kubenswrapper[7744]: I0220 15:00:14.034049 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef51d3b-cd8b-4f34-961e-8daebbed3ca6" containerName="installer" Feb 20 15:00:14.034411 master-0 kubenswrapper[7744]: I0220 15:00:14.034077 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="3753e8e6-e86c-4841-bc82-ce5321b5583f" containerName="installer" Feb 20 15:00:14.035014 master-0 kubenswrapper[7744]: I0220 15:00:14.034969 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:00:14.037750 master-0 kubenswrapper[7744]: I0220 15:00:14.037686 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 20 15:00:14.039169 master-0 kubenswrapper[7744]: I0220 15:00:14.039123 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-w4mx6" Feb 20 15:00:14.053160 master-0 kubenswrapper[7744]: I0220 15:00:14.052870 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"] Feb 20 15:00:14.121762 master-0 kubenswrapper[7744]: I0220 15:00:14.121644 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:00:14.122122 master-0 kubenswrapper[7744]: I0220 15:00:14.121817 7744 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:00:14.122122 master-0 kubenswrapper[7744]: I0220 15:00:14.122086 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:00:14.223977 master-0 kubenswrapper[7744]: I0220 15:00:14.223831 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:00:14.223977 master-0 kubenswrapper[7744]: I0220 15:00:14.223943 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:00:14.223977 master-0 kubenswrapper[7744]: I0220 15:00:14.223980 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 
15:00:14.224388 master-0 kubenswrapper[7744]: I0220 15:00:14.224110 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:00:14.224388 master-0 kubenswrapper[7744]: I0220 15:00:14.224273 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:00:14.254079 master-0 kubenswrapper[7744]: I0220 15:00:14.254022 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:00:14.371508 master-0 kubenswrapper[7744]: I0220 15:00:14.371363 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:00:14.886653 master-0 kubenswrapper[7744]: I0220 15:00:14.886510 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"] Feb 20 15:00:15.265835 master-0 kubenswrapper[7744]: I0220 15:00:15.265724 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"fea431d7-394f-4639-abd6-c70a28921fc6","Type":"ContainerStarted","Data":"c88ebe1ca0622fd22f4a19976f3ec2cf228a80d7134db8d5e9d57aad94e932f3"} Feb 20 15:00:15.885685 master-0 kubenswrapper[7744]: I0220 15:00:15.885604 7744 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 15:00:16.114388 master-0 kubenswrapper[7744]: I0220 15:00:16.114284 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Feb 20 15:00:16.115704 master-0 kubenswrapper[7744]: I0220 15:00:16.115659 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 20 15:00:16.118285 master-0 kubenswrapper[7744]: I0220 15:00:16.118242 7744 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 20 15:00:16.118606 master-0 kubenswrapper[7744]: I0220 15:00:16.118566 7744 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-j2nvb" Feb 20 15:00:16.133746 master-0 kubenswrapper[7744]: I0220 15:00:16.133558 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Feb 20 15:00:16.258430 master-0 kubenswrapper[7744]: I0220 15:00:16.258345 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab3c370c-58b4-4115-a359-b3f55c87284d-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"ab3c370c-58b4-4115-a359-b3f55c87284d\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 20 15:00:16.258687 master-0 kubenswrapper[7744]: I0220 15:00:16.258576 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab3c370c-58b4-4115-a359-b3f55c87284d-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"ab3c370c-58b4-4115-a359-b3f55c87284d\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 20 15:00:16.258687 master-0 kubenswrapper[7744]: I0220 15:00:16.258617 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ab3c370c-58b4-4115-a359-b3f55c87284d-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"ab3c370c-58b4-4115-a359-b3f55c87284d\") " 
pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 20 15:00:16.276184 master-0 kubenswrapper[7744]: I0220 15:00:16.276102 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"fea431d7-394f-4639-abd6-c70a28921fc6","Type":"ContainerStarted","Data":"91f517d397ca83de4c56e84947b8179187f25ef947f76871a498051ccbc41700"} Feb 20 15:00:16.307738 master-0 kubenswrapper[7744]: I0220 15:00:16.307620 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" podStartSLOduration=2.307593124 podStartE2EDuration="2.307593124s" podCreationTimestamp="2026-02-20 15:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:00:16.302261462 +0000 UTC m=+815.504461432" watchObservedRunningTime="2026-02-20 15:00:16.307593124 +0000 UTC m=+815.509793084" Feb 20 15:00:16.360826 master-0 kubenswrapper[7744]: I0220 15:00:16.360728 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab3c370c-58b4-4115-a359-b3f55c87284d-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"ab3c370c-58b4-4115-a359-b3f55c87284d\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 20 15:00:16.360826 master-0 kubenswrapper[7744]: I0220 15:00:16.360806 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ab3c370c-58b4-4115-a359-b3f55c87284d-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"ab3c370c-58b4-4115-a359-b3f55c87284d\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 20 15:00:16.361220 master-0 kubenswrapper[7744]: I0220 15:00:16.360853 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab3c370c-58b4-4115-a359-b3f55c87284d-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"ab3c370c-58b4-4115-a359-b3f55c87284d\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 20 15:00:16.361220 master-0 kubenswrapper[7744]: I0220 15:00:16.360993 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab3c370c-58b4-4115-a359-b3f55c87284d-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"ab3c370c-58b4-4115-a359-b3f55c87284d\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 20 15:00:16.361220 master-0 kubenswrapper[7744]: I0220 15:00:16.361047 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ab3c370c-58b4-4115-a359-b3f55c87284d-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"ab3c370c-58b4-4115-a359-b3f55c87284d\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 20 15:00:16.389361 master-0 kubenswrapper[7744]: I0220 15:00:16.389275 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab3c370c-58b4-4115-a359-b3f55c87284d-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"ab3c370c-58b4-4115-a359-b3f55c87284d\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 20 15:00:16.454903 master-0 kubenswrapper[7744]: I0220 15:00:16.454793 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 20 15:00:17.019065 master-0 kubenswrapper[7744]: I0220 15:00:17.018719 7744 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Feb 20 15:00:17.022868 master-0 kubenswrapper[7744]: W0220 15:00:17.022779 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podab3c370c_58b4_4115_a359_b3f55c87284d.slice/crio-c88e96be470ca889285a29fe125676aab3c03c8788f261a9c66f2a8654e5e5e5 WatchSource:0}: Error finding container c88e96be470ca889285a29fe125676aab3c03c8788f261a9c66f2a8654e5e5e5: Status 404 returned error can't find the container with id c88e96be470ca889285a29fe125676aab3c03c8788f261a9c66f2a8654e5e5e5 Feb 20 15:00:17.288543 master-0 kubenswrapper[7744]: I0220 15:00:17.288480 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"ab3c370c-58b4-4115-a359-b3f55c87284d","Type":"ContainerStarted","Data":"c88e96be470ca889285a29fe125676aab3c03c8788f261a9c66f2a8654e5e5e5"} Feb 20 15:00:18.299524 master-0 kubenswrapper[7744]: I0220 15:00:18.299430 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"ab3c370c-58b4-4115-a359-b3f55c87284d","Type":"ContainerStarted","Data":"00ed587ddf8155d51df42eba4d283cbd6beb09f53d1fc60d2651e845ec7cf08c"} Feb 20 15:00:18.325697 master-0 kubenswrapper[7744]: I0220 15:00:18.325556 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" podStartSLOduration=2.325533072 podStartE2EDuration="2.325533072s" podCreationTimestamp="2026-02-20 15:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 
15:00:18.322696122 +0000 UTC m=+817.524896072" watchObservedRunningTime="2026-02-20 15:00:18.325533072 +0000 UTC m=+817.527733032" Feb 20 15:00:25.038477 master-0 kubenswrapper[7744]: I0220 15:00:25.038392 7744 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 15:00:25.038477 master-0 kubenswrapper[7744]: I0220 15:00:25.038452 7744 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 15:00:25.072443 master-0 kubenswrapper[7744]: I0220 15:00:25.072151 7744 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Feb 20 15:00:25.080063 master-0 kubenswrapper[7744]: I0220 15:00:25.079969 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 20 15:00:25.092036 master-0 kubenswrapper[7744]: I0220 15:00:25.091964 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 20 15:00:25.107939 master-0 kubenswrapper[7744]: I0220 15:00:25.107832 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 20 15:00:25.364002 master-0 kubenswrapper[7744]: I0220 15:00:25.363814 7744 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 15:00:25.364002 master-0 kubenswrapper[7744]: I0220 15:00:25.363858 7744 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a76c28a2-1d48-49cb-8275-540ce323528c" Feb 20 15:00:31.099708 master-0 kubenswrapper[7744]: I0220 15:00:31.099566 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=6.099536445 podStartE2EDuration="6.099536445s" podCreationTimestamp="2026-02-20 15:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:00:31.095516595 +0000 UTC m=+830.297716555" watchObservedRunningTime="2026-02-20 15:00:31.099536445 +0000 UTC m=+830.301736405" Feb 20 15:00:50.562514 master-0 kubenswrapper[7744]: I0220 15:00:50.562401 7744 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 20 15:00:50.563491 master-0 kubenswrapper[7744]: I0220 15:00:50.562806 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" containerID="cri-o://ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2" gracePeriod=30 Feb 20 15:00:50.564182 master-0 kubenswrapper[7744]: I0220 15:00:50.562961 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788" gracePeriod=30 Feb 20 15:00:50.564558 master-0 kubenswrapper[7744]: I0220 15:00:50.564467 7744 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 15:00:50.565418 master-0 kubenswrapper[7744]: E0220 15:00:50.565039 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.565418 master-0 kubenswrapper[7744]: I0220 15:00:50.565069 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.565418 master-0 kubenswrapper[7744]: E0220 15:00:50.565100 7744 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 20 15:00:50.565418 master-0 kubenswrapper[7744]: I0220 15:00:50.565109 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 20 15:00:50.565418 master-0 kubenswrapper[7744]: E0220 15:00:50.565165 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.565418 master-0 kubenswrapper[7744]: I0220 15:00:50.565175 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.565418 master-0 kubenswrapper[7744]: E0220 15:00:50.565189 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.565418 master-0 kubenswrapper[7744]: I0220 15:00:50.565197 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.565418 master-0 kubenswrapper[7744]: E0220 15:00:50.565210 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.565418 master-0 kubenswrapper[7744]: I0220 15:00:50.565218 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.565418 master-0 kubenswrapper[7744]: E0220 15:00:50.565235 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.565418 master-0 kubenswrapper[7744]: I0220 15:00:50.565244 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" 
containerName="kube-controller-manager" Feb 20 15:00:50.566390 master-0 kubenswrapper[7744]: I0220 15:00:50.565577 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.566390 master-0 kubenswrapper[7744]: I0220 15:00:50.565632 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.566390 master-0 kubenswrapper[7744]: I0220 15:00:50.565643 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 20 15:00:50.566390 master-0 kubenswrapper[7744]: I0220 15:00:50.565658 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.566390 master-0 kubenswrapper[7744]: I0220 15:00:50.565669 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.566390 master-0 kubenswrapper[7744]: I0220 15:00:50.565680 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.566390 master-0 kubenswrapper[7744]: E0220 15:00:50.565832 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.566390 master-0 kubenswrapper[7744]: I0220 15:00:50.565839 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.566390 master-0 kubenswrapper[7744]: E0220 15:00:50.565861 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" 
containerName="kube-controller-manager" Feb 20 15:00:50.566390 master-0 kubenswrapper[7744]: I0220 15:00:50.565868 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.566390 master-0 kubenswrapper[7744]: I0220 15:00:50.566019 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.566390 master-0 kubenswrapper[7744]: I0220 15:00:50.566036 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 20 15:00:50.568499 master-0 kubenswrapper[7744]: I0220 15:00:50.568422 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:00:50.622373 master-0 kubenswrapper[7744]: I0220 15:00:50.622283 7744 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 15:00:50.656611 master-0 kubenswrapper[7744]: I0220 15:00:50.656500 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24c827995023caaffd01654949c8d4dd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:00:50.656910 master-0 kubenswrapper[7744]: I0220 15:00:50.656655 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24c827995023caaffd01654949c8d4dd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 
15:00:50.739608 master-0 kubenswrapper[7744]: I0220 15:00:50.739547 7744 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Feb 20 15:00:50.754010 master-0 kubenswrapper[7744]: I0220 15:00:50.753967 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 15:00:50.758668 master-0 kubenswrapper[7744]: I0220 15:00:50.758628 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24c827995023caaffd01654949c8d4dd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:00:50.758773 master-0 kubenswrapper[7744]: I0220 15:00:50.758676 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24c827995023caaffd01654949c8d4dd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:00:50.758864 master-0 kubenswrapper[7744]: I0220 15:00:50.758830 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24c827995023caaffd01654949c8d4dd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:00:50.758944 master-0 kubenswrapper[7744]: I0220 15:00:50.758875 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24c827995023caaffd01654949c8d4dd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:00:50.781422 master-0 kubenswrapper[7744]: I0220 15:00:50.781383 7744 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="11619f3c-1fc4-494e-9114-8108fd006388" Feb 20 15:00:50.860378 master-0 kubenswrapper[7744]: I0220 15:00:50.860278 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 20 15:00:50.860645 master-0 kubenswrapper[7744]: I0220 15:00:50.860626 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 20 15:00:50.860771 master-0 kubenswrapper[7744]: I0220 15:00:50.860755 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 20 15:00:50.860882 master-0 kubenswrapper[7744]: I0220 15:00:50.860866 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 20 15:00:50.861043 master-0 kubenswrapper[7744]: I0220 15:00:50.861023 7744 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 20 15:00:50.861264 master-0 kubenswrapper[7744]: I0220 15:00:50.860384 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:00:50.861414 master-0 kubenswrapper[7744]: I0220 15:00:50.860693 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets" (OuterVolumeSpecName: "secrets") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:00:50.861414 master-0 kubenswrapper[7744]: I0220 15:00:50.860802 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs" (OuterVolumeSpecName: "logs") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:00:50.861414 master-0 kubenswrapper[7744]: I0220 15:00:50.860967 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config" (OuterVolumeSpecName: "config") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:00:50.861414 master-0 kubenswrapper[7744]: I0220 15:00:50.861064 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:00:50.862085 master-0 kubenswrapper[7744]: I0220 15:00:50.862040 7744 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Feb 20 15:00:50.862085 master-0 kubenswrapper[7744]: I0220 15:00:50.862078 7744 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") on node \"master-0\" DevicePath \"\"" Feb 20 15:00:50.862217 master-0 kubenswrapper[7744]: I0220 15:00:50.862092 7744 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:00:50.862217 master-0 kubenswrapper[7744]: I0220 15:00:50.862104 7744 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:00:50.862217 master-0 kubenswrapper[7744]: I0220 15:00:50.862115 7744 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Feb 20 15:00:50.914007 master-0 kubenswrapper[7744]: I0220 15:00:50.913944 7744 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:00:50.940089 master-0 kubenswrapper[7744]: W0220 15:00:50.940024 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24c827995023caaffd01654949c8d4dd.slice/crio-733f20d59a2548ac1c9bcca1dc13fb3a2581f1cde83bb3bdf7f826c178e76f76 WatchSource:0}: Error finding container 733f20d59a2548ac1c9bcca1dc13fb3a2581f1cde83bb3bdf7f826c178e76f76: Status 404 returned error can't find the container with id 733f20d59a2548ac1c9bcca1dc13fb3a2581f1cde83bb3bdf7f826c178e76f76 Feb 20 15:00:51.055039 master-0 kubenswrapper[7744]: I0220 15:00:51.054958 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ad9373c007a4fcd25e70622bdc8deb" path="/var/lib/kubelet/pods/c9ad9373c007a4fcd25e70622bdc8deb/volumes" Feb 20 15:00:51.055822 master-0 kubenswrapper[7744]: I0220 15:00:51.055775 7744 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Feb 20 15:00:51.078254 master-0 kubenswrapper[7744]: I0220 15:00:51.078164 7744 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 20 15:00:51.078254 master-0 kubenswrapper[7744]: I0220 15:00:51.078224 7744 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="11619f3c-1fc4-494e-9114-8108fd006388" Feb 20 15:00:51.082009 master-0 kubenswrapper[7744]: I0220 15:00:51.081944 7744 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 20 15:00:51.082009 master-0 kubenswrapper[7744]: I0220 15:00:51.081993 7744 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" 
mirrorPodUID="11619f3c-1fc4-494e-9114-8108fd006388" Feb 20 15:00:51.628076 master-0 kubenswrapper[7744]: I0220 15:00:51.628006 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24c827995023caaffd01654949c8d4dd","Type":"ContainerStarted","Data":"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b"} Feb 20 15:00:51.628076 master-0 kubenswrapper[7744]: I0220 15:00:51.628074 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24c827995023caaffd01654949c8d4dd","Type":"ContainerStarted","Data":"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5"} Feb 20 15:00:51.628871 master-0 kubenswrapper[7744]: I0220 15:00:51.628094 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24c827995023caaffd01654949c8d4dd","Type":"ContainerStarted","Data":"733f20d59a2548ac1c9bcca1dc13fb3a2581f1cde83bb3bdf7f826c178e76f76"} Feb 20 15:00:51.630280 master-0 kubenswrapper[7744]: I0220 15:00:51.630230 7744 generic.go:334] "Generic (PLEG): container finished" podID="ab3c370c-58b4-4115-a359-b3f55c87284d" containerID="00ed587ddf8155d51df42eba4d283cbd6beb09f53d1fc60d2651e845ec7cf08c" exitCode=0 Feb 20 15:00:51.630372 master-0 kubenswrapper[7744]: I0220 15:00:51.630318 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"ab3c370c-58b4-4115-a359-b3f55c87284d","Type":"ContainerDied","Data":"00ed587ddf8155d51df42eba4d283cbd6beb09f53d1fc60d2651e845ec7cf08c"} Feb 20 15:00:51.633175 master-0 kubenswrapper[7744]: I0220 15:00:51.633119 7744 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788" exitCode=0 Feb 20 
15:00:51.633175 master-0 kubenswrapper[7744]: I0220 15:00:51.633153 7744 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2" exitCode=0 Feb 20 15:00:51.633413 master-0 kubenswrapper[7744]: I0220 15:00:51.633195 7744 scope.go:117] "RemoveContainer" containerID="40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788" Feb 20 15:00:51.633413 master-0 kubenswrapper[7744]: I0220 15:00:51.633356 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 20 15:00:51.661432 master-0 kubenswrapper[7744]: I0220 15:00:51.661364 7744 scope.go:117] "RemoveContainer" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d" Feb 20 15:00:51.700182 master-0 kubenswrapper[7744]: I0220 15:00:51.700107 7744 scope.go:117] "RemoveContainer" containerID="ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2" Feb 20 15:00:51.736468 master-0 kubenswrapper[7744]: I0220 15:00:51.736428 7744 scope.go:117] "RemoveContainer" containerID="40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788" Feb 20 15:00:51.737023 master-0 kubenswrapper[7744]: E0220 15:00:51.736969 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788\": container with ID starting with 40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788 not found: ID does not exist" containerID="40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788" Feb 20 15:00:51.737151 master-0 kubenswrapper[7744]: I0220 15:00:51.737038 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788"} err="failed to get container status 
\"40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788\": rpc error: code = NotFound desc = could not find container \"40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788\": container with ID starting with 40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788 not found: ID does not exist" Feb 20 15:00:51.737151 master-0 kubenswrapper[7744]: I0220 15:00:51.737067 7744 scope.go:117] "RemoveContainer" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d" Feb 20 15:00:51.738432 master-0 kubenswrapper[7744]: E0220 15:00:51.737547 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d\": container with ID starting with 29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d not found: ID does not exist" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d" Feb 20 15:00:51.738432 master-0 kubenswrapper[7744]: I0220 15:00:51.737587 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d"} err="failed to get container status \"29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d\": rpc error: code = NotFound desc = could not find container \"29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d\": container with ID starting with 29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d not found: ID does not exist" Feb 20 15:00:51.738432 master-0 kubenswrapper[7744]: I0220 15:00:51.737613 7744 scope.go:117] "RemoveContainer" containerID="ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2" Feb 20 15:00:51.738432 master-0 kubenswrapper[7744]: E0220 15:00:51.737977 7744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2\": container with ID starting with ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2 not found: ID does not exist" containerID="ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2" Feb 20 15:00:51.738432 master-0 kubenswrapper[7744]: I0220 15:00:51.737998 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2"} err="failed to get container status \"ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2\": rpc error: code = NotFound desc = could not find container \"ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2\": container with ID starting with ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2 not found: ID does not exist" Feb 20 15:00:51.738432 master-0 kubenswrapper[7744]: I0220 15:00:51.738032 7744 scope.go:117] "RemoveContainer" containerID="40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788" Feb 20 15:00:51.738887 master-0 kubenswrapper[7744]: I0220 15:00:51.738837 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788"} err="failed to get container status \"40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788\": rpc error: code = NotFound desc = could not find container \"40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788\": container with ID starting with 40a64d1b035523aeb3a7f0bbcbe7ebf6b87f50eae3a00e59f87f860298777788 not found: ID does not exist" Feb 20 15:00:51.738887 master-0 kubenswrapper[7744]: I0220 15:00:51.738852 7744 scope.go:117] "RemoveContainer" containerID="29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d" Feb 20 15:00:51.739402 master-0 kubenswrapper[7744]: I0220 15:00:51.739307 7744 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d"} err="failed to get container status \"29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d\": rpc error: code = NotFound desc = could not find container \"29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d\": container with ID starting with 29b40146bf3d0aa19010d06378cf3e0c8fafef999297476a5b9ecba0b355f40d not found: ID does not exist"
Feb 20 15:00:51.739672 master-0 kubenswrapper[7744]: I0220 15:00:51.739385 7744 scope.go:117] "RemoveContainer" containerID="ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2"
Feb 20 15:00:51.740507 master-0 kubenswrapper[7744]: I0220 15:00:51.740456 7744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2"} err="failed to get container status \"ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2\": rpc error: code = NotFound desc = could not find container \"ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2\": container with ID starting with ee7c24209f258e3f96f54dfa6e2dd9ddef705809075545d7673b369bc8cf23e2 not found: ID does not exist"
Feb 20 15:00:52.647756 master-0 kubenswrapper[7744]: I0220 15:00:52.647674 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24c827995023caaffd01654949c8d4dd","Type":"ContainerStarted","Data":"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639"}
Feb 20 15:00:52.647756 master-0 kubenswrapper[7744]: I0220 15:00:52.647759 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24c827995023caaffd01654949c8d4dd","Type":"ContainerStarted","Data":"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10"}
Feb 20 15:00:52.694471 master-0 kubenswrapper[7744]: I0220 15:00:52.693295 7744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.693254407 podStartE2EDuration="2.693254407s" podCreationTimestamp="2026-02-20 15:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:00:52.686091898 +0000 UTC m=+851.888291828" watchObservedRunningTime="2026-02-20 15:00:52.693254407 +0000 UTC m=+851.895454377"
Feb 20 15:00:53.064243 master-0 kubenswrapper[7744]: I0220 15:00:53.064172 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 20 15:00:53.214532 master-0 kubenswrapper[7744]: I0220 15:00:53.211209 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab3c370c-58b4-4115-a359-b3f55c87284d-kubelet-dir\") pod \"ab3c370c-58b4-4115-a359-b3f55c87284d\" (UID: \"ab3c370c-58b4-4115-a359-b3f55c87284d\") "
Feb 20 15:00:53.214532 master-0 kubenswrapper[7744]: I0220 15:00:53.211314 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab3c370c-58b4-4115-a359-b3f55c87284d-kube-api-access\") pod \"ab3c370c-58b4-4115-a359-b3f55c87284d\" (UID: \"ab3c370c-58b4-4115-a359-b3f55c87284d\") "
Feb 20 15:00:53.214532 master-0 kubenswrapper[7744]: I0220 15:00:53.211341 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3c370c-58b4-4115-a359-b3f55c87284d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ab3c370c-58b4-4115-a359-b3f55c87284d" (UID: "ab3c370c-58b4-4115-a359-b3f55c87284d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:00:53.214532 master-0 kubenswrapper[7744]: I0220 15:00:53.211383 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ab3c370c-58b4-4115-a359-b3f55c87284d-var-lock\") pod \"ab3c370c-58b4-4115-a359-b3f55c87284d\" (UID: \"ab3c370c-58b4-4115-a359-b3f55c87284d\") "
Feb 20 15:00:53.214532 master-0 kubenswrapper[7744]: I0220 15:00:53.211463 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3c370c-58b4-4115-a359-b3f55c87284d-var-lock" (OuterVolumeSpecName: "var-lock") pod "ab3c370c-58b4-4115-a359-b3f55c87284d" (UID: "ab3c370c-58b4-4115-a359-b3f55c87284d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:00:53.214532 master-0 kubenswrapper[7744]: I0220 15:00:53.211830 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab3c370c-58b4-4115-a359-b3f55c87284d-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:00:53.214532 master-0 kubenswrapper[7744]: I0220 15:00:53.211853 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ab3c370c-58b4-4115-a359-b3f55c87284d-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 15:00:53.215756 master-0 kubenswrapper[7744]: I0220 15:00:53.214892 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3c370c-58b4-4115-a359-b3f55c87284d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ab3c370c-58b4-4115-a359-b3f55c87284d" (UID: "ab3c370c-58b4-4115-a359-b3f55c87284d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:00:53.313905 master-0 kubenswrapper[7744]: I0220 15:00:53.313665 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab3c370c-58b4-4115-a359-b3f55c87284d-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 15:00:53.431539 master-0 kubenswrapper[7744]: I0220 15:00:53.431468 7744 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 20 15:00:53.431971 master-0 kubenswrapper[7744]: E0220 15:00:53.431914 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3c370c-58b4-4115-a359-b3f55c87284d" containerName="installer"
Feb 20 15:00:53.431971 master-0 kubenswrapper[7744]: I0220 15:00:53.431966 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3c370c-58b4-4115-a359-b3f55c87284d" containerName="installer"
Feb 20 15:00:53.432232 master-0 kubenswrapper[7744]: I0220 15:00:53.432199 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3c370c-58b4-4115-a359-b3f55c87284d" containerName="installer"
Feb 20 15:00:53.432907 master-0 kubenswrapper[7744]: I0220 15:00:53.432866 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.433069 master-0 kubenswrapper[7744]: I0220 15:00:53.433011 7744 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 20 15:00:53.433616 master-0 kubenswrapper[7744]: I0220 15:00:53.433507 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" containerID="cri-o://321be2d7453c33396b3363bf789e4d552d4e8d66090aa9915bf60f644a971c6e" gracePeriod=15
Feb 20 15:00:53.433723 master-0 kubenswrapper[7744]: I0220 15:00:53.433530 7744 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3c2b6c4d3887c6ce78fb1f319d3d917dd19b6ede5e9ab3d53c00d05b6ea4ef23" gracePeriod=15
Feb 20 15:00:53.436871 master-0 kubenswrapper[7744]: I0220 15:00:53.436785 7744 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 20 15:00:53.437287 master-0 kubenswrapper[7744]: E0220 15:00:53.437246 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup"
Feb 20 15:00:53.437287 master-0 kubenswrapper[7744]: I0220 15:00:53.437276 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup"
Feb 20 15:00:53.437435 master-0 kubenswrapper[7744]: E0220 15:00:53.437325 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz"
Feb 20 15:00:53.437435 master-0 kubenswrapper[7744]: I0220 15:00:53.437340 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz"
Feb 20 15:00:53.437435 master-0 kubenswrapper[7744]: E0220 15:00:53.437364 7744 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver"
Feb 20 15:00:53.437435 master-0 kubenswrapper[7744]: I0220 15:00:53.437377 7744 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver"
Feb 20 15:00:53.437713 master-0 kubenswrapper[7744]: I0220 15:00:53.437600 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup"
Feb 20 15:00:53.437713 master-0 kubenswrapper[7744]: I0220 15:00:53.437622 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver"
Feb 20 15:00:53.437713 master-0 kubenswrapper[7744]: I0220 15:00:53.437660 7744 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz"
Feb 20 15:00:53.441333 master-0 kubenswrapper[7744]: I0220 15:00:53.441280 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:53.507371 master-0 kubenswrapper[7744]: E0220 15:00:53.507248 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:53.516277 master-0 kubenswrapper[7744]: I0220 15:00:53.516199 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.516424 master-0 kubenswrapper[7744]: I0220 15:00:53.516380 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:53.516545 master-0 kubenswrapper[7744]: I0220 15:00:53.516439 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.516545 master-0 kubenswrapper[7744]: I0220 15:00:53.516525 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.516811 master-0 kubenswrapper[7744]: I0220 15:00:53.516622 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.516811 master-0 kubenswrapper[7744]: I0220 15:00:53.516658 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:53.516811 master-0 kubenswrapper[7744]: I0220 15:00:53.516689 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.516811 master-0 kubenswrapper[7744]: I0220 15:00:53.516724 7744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:53.618131 master-0 kubenswrapper[7744]: I0220 15:00:53.618031 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:53.618364 master-0 kubenswrapper[7744]: I0220 15:00:53.618150 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.618364 master-0 kubenswrapper[7744]: I0220 15:00:53.618273 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:53.618364 master-0 kubenswrapper[7744]: I0220 15:00:53.618283 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.618364 master-0 kubenswrapper[7744]: I0220 15:00:53.618332 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.618364 master-0 kubenswrapper[7744]: I0220 15:00:53.618361 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.618675 master-0 kubenswrapper[7744]: I0220 15:00:53.618401 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.618675 master-0 kubenswrapper[7744]: I0220 15:00:53.618428 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:53.618675 master-0 kubenswrapper[7744]: I0220 15:00:53.618480 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.618675 master-0 kubenswrapper[7744]: I0220 15:00:53.618506 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.618675 master-0 kubenswrapper[7744]: I0220 15:00:53.618532 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:53.618675 master-0 kubenswrapper[7744]: I0220 15:00:53.618596 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:53.618675 master-0 kubenswrapper[7744]: I0220 15:00:53.618647 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.619233 master-0 kubenswrapper[7744]: I0220 15:00:53.618693 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:53.619233 master-0 kubenswrapper[7744]: I0220 15:00:53.618787 7744 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.619233 master-0 kubenswrapper[7744]: I0220 15:00:53.619017 7744 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:00:53.660933 master-0 kubenswrapper[7744]: I0220 15:00:53.660842 7744 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="3c2b6c4d3887c6ce78fb1f319d3d917dd19b6ede5e9ab3d53c00d05b6ea4ef23" exitCode=0
Feb 20 15:00:53.663572 master-0 kubenswrapper[7744]: I0220 15:00:53.663446 7744 generic.go:334] "Generic (PLEG): container finished" podID="fea431d7-394f-4639-abd6-c70a28921fc6" containerID="91f517d397ca83de4c56e84947b8179187f25ef947f76871a498051ccbc41700" exitCode=0
Feb 20 15:00:53.663572 master-0 kubenswrapper[7744]: I0220 15:00:53.663560 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"fea431d7-394f-4639-abd6-c70a28921fc6","Type":"ContainerDied","Data":"91f517d397ca83de4c56e84947b8179187f25ef947f76871a498051ccbc41700"}
Feb 20 15:00:53.665310 master-0 kubenswrapper[7744]: I0220 15:00:53.665231 7744 status_manager.go:851] "Failed to get status for pod" podUID="fea431d7-394f-4639-abd6-c70a28921fc6" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:00:53.666752 master-0 kubenswrapper[7744]: I0220 15:00:53.666705 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"ab3c370c-58b4-4115-a359-b3f55c87284d","Type":"ContainerDied","Data":"c88e96be470ca889285a29fe125676aab3c03c8788f261a9c66f2a8654e5e5e5"}
Feb 20 15:00:53.666861 master-0 kubenswrapper[7744]: I0220 15:00:53.666776 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88e96be470ca889285a29fe125676aab3c03c8788f261a9c66f2a8654e5e5e5"
Feb 20 15:00:53.666861 master-0 kubenswrapper[7744]: I0220 15:00:53.666724 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 20 15:00:53.690908 master-0 kubenswrapper[7744]: I0220 15:00:53.690847 7744 status_manager.go:851] "Failed to get status for pod" podUID="fea431d7-394f-4639-abd6-c70a28921fc6" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:00:53.691580 master-0 kubenswrapper[7744]: I0220 15:00:53.691511 7744 status_manager.go:851] "Failed to get status for pod" podUID="ab3c370c-58b4-4115-a359-b3f55c87284d" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:00:53.808489 master-0 kubenswrapper[7744]: I0220 15:00:53.808381 7744 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:53.847400 master-0 kubenswrapper[7744]: W0220 15:00:53.847244 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb342c942d3d92fd08ed7cf68fafb94c.slice/crio-13613c47bf97c812cc9e166f449f1af9864a34c9dcb66bd85e8e3c727e970a41 WatchSource:0}: Error finding container 13613c47bf97c812cc9e166f449f1af9864a34c9dcb66bd85e8e3c727e970a41: Status 404 returned error can't find the container with id 13613c47bf97c812cc9e166f449f1af9864a34c9dcb66bd85e8e3c727e970a41
Feb 20 15:00:53.852999 master-0 kubenswrapper[7744]: E0220 15:00:53.852809 7744 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.1895fc7fb3f5f214 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:eb342c942d3d92fd08ed7cf68fafb94c,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 15:00:53.850444308 +0000 UTC m=+853.052644268,LastTimestamp:2026-02-20 15:00:53.850444308 +0000 UTC m=+853.052644268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 15:00:54.337642 master-0 kubenswrapper[7744]: I0220 15:00:54.337542 7744 patch_prober.go:28] interesting pod/bootstrap-kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" start-of-body=
Feb 20 15:00:54.337849 master-0 kubenswrapper[7744]: I0220 15:00:54.337638 7744 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:00:54.679733 master-0 kubenswrapper[7744]: I0220 15:00:54.679614 7744 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e" exitCode=0
Feb 20 15:00:54.680548 master-0 kubenswrapper[7744]: I0220 15:00:54.679739 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e"}
Feb 20 15:00:54.680548 master-0 kubenswrapper[7744]: I0220 15:00:54.679824 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"13613c47bf97c812cc9e166f449f1af9864a34c9dcb66bd85e8e3c727e970a41"}
Feb 20 15:00:54.681594 master-0 kubenswrapper[7744]: I0220 15:00:54.681523 7744 status_manager.go:851] "Failed to get status for pod" podUID="fea431d7-394f-4639-abd6-c70a28921fc6" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:00:54.681688 master-0 kubenswrapper[7744]: E0220 15:00:54.681560 7744 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:00:54.682604 master-0 kubenswrapper[7744]: I0220 15:00:54.682521 7744 status_manager.go:851] "Failed to get status for pod" podUID="ab3c370c-58b4-4115-a359-b3f55c87284d" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:00:55.108312 master-0 kubenswrapper[7744]: I0220 15:00:55.108229 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Feb 20 15:00:55.110193 master-0 kubenswrapper[7744]: I0220 15:00:55.110111 7744 status_manager.go:851] "Failed to get status for pod" podUID="fea431d7-394f-4639-abd6-c70a28921fc6" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:00:55.111342 master-0 kubenswrapper[7744]: I0220 15:00:55.111294 7744 status_manager.go:851] "Failed to get status for pod" podUID="ab3c370c-58b4-4115-a359-b3f55c87284d" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:00:55.249515 master-0 kubenswrapper[7744]: I0220 15:00:55.249146 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"fea431d7-394f-4639-abd6-c70a28921fc6\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") "
Feb 20 15:00:55.249515 master-0 kubenswrapper[7744]: I0220 15:00:55.249312 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-var-lock\") pod \"fea431d7-394f-4639-abd6-c70a28921fc6\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") "
Feb 20 15:00:55.249515 master-0 kubenswrapper[7744]: I0220 15:00:55.249351 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-kubelet-dir\") pod \"fea431d7-394f-4639-abd6-c70a28921fc6\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") "
Feb 20 15:00:55.249515 master-0 kubenswrapper[7744]: I0220 15:00:55.249456 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-var-lock" (OuterVolumeSpecName: "var-lock") pod "fea431d7-394f-4639-abd6-c70a28921fc6" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:00:55.249850 master-0 kubenswrapper[7744]: I0220 15:00:55.249570 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fea431d7-394f-4639-abd6-c70a28921fc6" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:00:55.250207 master-0 kubenswrapper[7744]: I0220 15:00:55.250158 7744 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 15:00:55.250207 master-0 kubenswrapper[7744]: I0220 15:00:55.250200 7744 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:00:55.253443 master-0 kubenswrapper[7744]: I0220 15:00:55.253394 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fea431d7-394f-4639-abd6-c70a28921fc6" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:00:55.351571 master-0 kubenswrapper[7744]: I0220 15:00:55.351517 7744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 15:00:55.687253 master-0 kubenswrapper[7744]: I0220 15:00:55.687166 7744 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="321be2d7453c33396b3363bf789e4d552d4e8d66090aa9915bf60f644a971c6e" exitCode=0
Feb 20 15:00:55.687253 master-0 kubenswrapper[7744]: I0220 15:00:55.687244 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7111b0bf2b7379929af69699174f229cbbc25f01fc7ffc44b3371950f17c6f2"
Feb 20 15:00:55.689081 master-0 kubenswrapper[7744]: I0220 15:00:55.688262 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"fea431d7-394f-4639-abd6-c70a28921fc6","Type":"ContainerDied","Data":"c88ebe1ca0622fd22f4a19976f3ec2cf228a80d7134db8d5e9d57aad94e932f3"}
Feb 20 15:00:55.689081 master-0 kubenswrapper[7744]: I0220 15:00:55.688304 7744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88ebe1ca0622fd22f4a19976f3ec2cf228a80d7134db8d5e9d57aad94e932f3"
Feb 20 15:00:55.689081 master-0 kubenswrapper[7744]: I0220 15:00:55.688281 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Feb 20 15:00:55.689863 master-0 kubenswrapper[7744]: I0220 15:00:55.689833 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd"}
Feb 20 15:00:55.689863 master-0 kubenswrapper[7744]: I0220 15:00:55.689861 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4"}
Feb 20 15:00:55.714610 master-0 kubenswrapper[7744]: I0220 15:00:55.714573 7744 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 20 15:00:55.859164 master-0 kubenswrapper[7744]: I0220 15:00:55.859105 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 20 15:00:55.859425 master-0 kubenswrapper[7744]: I0220 15:00:55.859229 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 20 15:00:55.859425 master-0 kubenswrapper[7744]: I0220 15:00:55.859235 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:00:55.859425 master-0 kubenswrapper[7744]: I0220 15:00:55.859253 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 20 15:00:55.859425 master-0 kubenswrapper[7744]: I0220 15:00:55.859311 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs" (OuterVolumeSpecName: "logs") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:00:55.859425 master-0 kubenswrapper[7744]: I0220 15:00:55.859372 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 20 15:00:55.859425 master-0 kubenswrapper[7744]: I0220 15:00:55.859402 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 20 15:00:55.859425 master-0 kubenswrapper[7744]: I0220 15:00:55.859428 7744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 20 15:00:55.860018 master-0 kubenswrapper[7744]: I0220 15:00:55.859408 7744 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets" (OuterVolumeSpecName: "secrets") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:00:55.860018 master-0 kubenswrapper[7744]: I0220 15:00:55.859499 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:00:55.860018 master-0 kubenswrapper[7744]: I0220 15:00:55.859569 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:00:55.860018 master-0 kubenswrapper[7744]: I0220 15:00:55.859557 7744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config" (OuterVolumeSpecName: "config") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:00:55.860018 master-0 kubenswrapper[7744]: I0220 15:00:55.859941 7744 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:00:55.860018 master-0 kubenswrapper[7744]: I0220 15:00:55.859958 7744 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Feb 20 15:00:55.860018 master-0 kubenswrapper[7744]: I0220 15:00:55.859971 7744 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Feb 20 15:00:55.860018 master-0 kubenswrapper[7744]: I0220 15:00:55.859983 7744 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 15:00:55.860018 master-0 kubenswrapper[7744]: I0220 15:00:55.859994 7744 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") on node \"master-0\" DevicePath \"\"" Feb 20 15:00:55.860018 master-0 kubenswrapper[7744]: I0220 15:00:55.860004 7744 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:00:56.728165 master-0 kubenswrapper[7744]: I0220 15:00:56.727797 7744 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 20 15:00:56.728165 master-0 kubenswrapper[7744]: I0220 15:00:56.727800 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"77c5708572ab9b4b6918c12a1fcd864571adf469d8703ecc7203af8fab7885f3"} Feb 20 15:00:56.728165 master-0 kubenswrapper[7744]: I0220 15:00:56.727873 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0"} Feb 20 15:00:56.728165 master-0 kubenswrapper[7744]: I0220 15:00:56.727895 7744 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f"} Feb 20 15:00:57.048516 master-0 kubenswrapper[7744]: I0220 15:00:57.047635 7744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687e92a6cecf1e2beeef16a0b322ad08" path="/var/lib/kubelet/pods/687e92a6cecf1e2beeef16a0b322ad08/volumes" Feb 20 15:00:57.048516 master-0 kubenswrapper[7744]: I0220 15:00:57.048036 7744 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 20 15:00:58.508887 master-0 kubenswrapper[7744]: I0220 15:00:58.508806 7744 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:00:58.550588 master-0 kubenswrapper[7744]: W0220 15:00:58.550517 7744 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c4f5d60772fa42f26e9c219bffa62b9.slice/crio-29c1db2527f092355034b5557942ea50b25282b9b77501d427c1a6d0e01d2771 WatchSource:0}: Error finding container 29c1db2527f092355034b5557942ea50b25282b9b77501d427c1a6d0e01d2771: Status 404 returned error can't find the container with id 29c1db2527f092355034b5557942ea50b25282b9b77501d427c1a6d0e01d2771 Feb 20 15:00:59.763685 master-0 kubenswrapper[7744]: I0220 15:00:59.763620 7744 generic.go:334] "Generic (PLEG): container finished" podID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerID="4c99e85f05d7056363eecf219cc429ad9226d3b3266d2b4c70190b2024933a11" exitCode=0 Feb 20 15:01:00.317860 master-0 kubenswrapper[7744]: I0220 15:01:00.317772 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:01:00.317860 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:01:00.317860 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:01:00.317860 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:01:00.318161 master-0 kubenswrapper[7744]: I0220 15:01:00.317863 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:01:01.317546 master-0 kubenswrapper[7744]: I0220 15:01:01.317467 7744 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 20 15:01:01.317546 master-0 kubenswrapper[7744]: [-]has-synced failed: reason withheld Feb 20 15:01:01.317546 master-0 kubenswrapper[7744]: [+]process-running ok Feb 20 15:01:01.317546 master-0 kubenswrapper[7744]: healthz check failed Feb 20 15:01:01.318717 master-0 kubenswrapper[7744]: I0220 15:01:01.317559 7744 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 20 15:01:01.594482 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 20 15:01:01.621101 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 20 15:01:01.622300 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 20 15:01:01.626053 master-0 systemd[1]: kubelet.service: Consumed 2min 23.959s CPU time. Feb 20 15:01:01.645272 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 20 15:01:01.796988 master-0 kubenswrapper[28120]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 20 15:01:01.796988 master-0 kubenswrapper[28120]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 20 15:01:01.796988 master-0 kubenswrapper[28120]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 20 15:01:01.796988 master-0 kubenswrapper[28120]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 15:01:01.796988 master-0 kubenswrapper[28120]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 20 15:01:01.796988 master-0 kubenswrapper[28120]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 20 15:01:01.797902 master-0 kubenswrapper[28120]: I0220 15:01:01.797104 28120 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 20 15:01:01.799771 master-0 kubenswrapper[28120]: W0220 15:01:01.799729 28120 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 15:01:01.799771 master-0 kubenswrapper[28120]: W0220 15:01:01.799753 28120 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 15:01:01.799771 master-0 kubenswrapper[28120]: W0220 15:01:01.799761 28120 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 15:01:01.799771 master-0 kubenswrapper[28120]: W0220 15:01:01.799768 28120 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 15:01:01.799771 master-0 kubenswrapper[28120]: W0220 15:01:01.799774 28120 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 15:01:01.799771 master-0 kubenswrapper[28120]: W0220 15:01:01.799781 28120 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799788 28120 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799795 28120 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799802 28120 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799808 28120 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799814 28120 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799821 28120 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799827 28120 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799833 28120 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799839 28120 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799844 28120 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799849 28120 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799854 28120 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799859 28120 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799864 28120 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799869 28120 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799874 28120 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799880 28120 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799885 28120 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799890 28120 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 15:01:01.800224 master-0 kubenswrapper[28120]: W0220 15:01:01.799895 28120 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799900 28120 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799905 28120 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799911 28120 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799916 28120 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799936 28120 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799941 28120 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799946 28120 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799951 28120 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799956 28120 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799963 28120 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799970 28120 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799976 28120 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.799997 28120 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.800003 28120 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.800008 28120 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.800013 28120 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.800018 28120 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.800023 28120 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.800028 28120 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 15:01:01.801329 master-0 kubenswrapper[28120]: W0220 15:01:01.800033 28120 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800038 28120 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800043 28120 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800048 28120 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800053 28120 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800058 28120 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800062 28120 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800067 28120 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800077 28120 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800083 28120 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800088 28120 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800093 28120 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800100 28120 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800106 28120 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800110 28120 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800115 28120 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800120 28120 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800125 28120 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800134 28120 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800140 28120 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 15:01:01.802434 master-0 kubenswrapper[28120]: W0220 15:01:01.800145 28120 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: W0220 15:01:01.800150 28120 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: W0220 15:01:01.800155 28120 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: W0220 15:01:01.800160 28120 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: W0220 15:01:01.800168 28120 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: W0220 15:01:01.800173 28120 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: W0220 15:01:01.800179 28120 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800274 28120 flags.go:64] FLAG: --address="0.0.0.0"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800285 28120 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800296 28120 flags.go:64] FLAG: --anonymous-auth="true"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800303 28120 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800309 28120 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800315 28120 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800322 28120 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800331 28120 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800339 28120 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800353 28120 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800361 28120 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800369 28120 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800376 28120 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800384 28120 flags.go:64] FLAG: --cgroup-root=""
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800391 28120 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800398 28120 flags.go:64] FLAG: --client-ca-file=""
Feb 20 15:01:01.803688 master-0 kubenswrapper[28120]: I0220 15:01:01.800405 28120 flags.go:64] FLAG: --cloud-config=""
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800410 28120 flags.go:64] FLAG: --cloud-provider=""
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800416 28120 flags.go:64] FLAG: --cluster-dns="[]"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800423 28120 flags.go:64] FLAG: --cluster-domain=""
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800428 28120 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800433 28120 flags.go:64] FLAG: --config-dir=""
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800439 28120 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800445 28120 flags.go:64] FLAG: --container-log-max-files="5"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800452 28120 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800457 28120 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800463 28120 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800469 28120 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800476 28120 flags.go:64] FLAG: --contention-profiling="false"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800482 28120 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800488 28120 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800494 28120 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800500 28120 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800507 28120 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800513 28120 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800518 28120 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800524 28120 flags.go:64] FLAG: --enable-load-reader="false"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800529 28120 flags.go:64] FLAG: --enable-server="true"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800535 28120 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800543 28120 flags.go:64] FLAG: --event-burst="100"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800549 28120 flags.go:64] FLAG: --event-qps="50"
Feb 20 15:01:01.805065 master-0 kubenswrapper[28120]: I0220 15:01:01.800557 28120 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800563 28120 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800569 28120 flags.go:64] FLAG: --eviction-hard=""
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800581 28120 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800587 28120 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800593 28120 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800598 28120 flags.go:64] FLAG: --eviction-soft=""
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800604 28120 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800611 28120 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800617 28120 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800623 28120 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800629 28120 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800635 28120 flags.go:64] FLAG: --fail-swap-on="true"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800641 28120 flags.go:64] FLAG: --feature-gates=""
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800647 28120 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800653 28120 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800658 28120 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800664 28120 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800670 28120 flags.go:64] FLAG: --healthz-port="10248"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800676 28120 flags.go:64] FLAG: --help="false"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800681 28120 flags.go:64] FLAG: --hostname-override=""
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800687 28120 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800693 28120 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800699 28120 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800705 28120 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 20 15:01:01.806667 master-0 kubenswrapper[28120]: I0220 15:01:01.800710 28120 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800716 28120 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800722 28120 flags.go:64] FLAG: --image-service-endpoint=""
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800727 28120 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800733 28120 flags.go:64] FLAG: --kube-api-burst="100"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800738 28120 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800744 28120 flags.go:64] FLAG: --kube-api-qps="50"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800752 28120 flags.go:64] FLAG: --kube-reserved=""
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800757 28120 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800763 28120 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800769 28120 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800774 28120 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800780 28120 flags.go:64] FLAG: --lock-file=""
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800785 28120 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800791 28120 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800797 28120 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800805 28120 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800811 28120 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800816 28120 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800822 28120 flags.go:64] FLAG: --logging-format="text"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800827 28120 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800833 28120 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800839 28120 flags.go:64] FLAG: --manifest-url=""
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800844 28120 flags.go:64] FLAG: --manifest-url-header=""
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800851 28120 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 20 15:01:01.808107 master-0 kubenswrapper[28120]: I0220 15:01:01.800857 28120 flags.go:64] FLAG: --max-open-files="1000000"
Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800864 28120 flags.go:64] FLAG: --max-pods="110"
Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800869 28120 flags.go:64]
FLAG: --maximum-dead-containers="-1" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800875 28120 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800881 28120 flags.go:64] FLAG: --memory-manager-policy="None" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800886 28120 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800892 28120 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800898 28120 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800904 28120 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800916 28120 flags.go:64] FLAG: --node-status-max-images="50" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800942 28120 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800949 28120 flags.go:64] FLAG: --oom-score-adj="-999" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800955 28120 flags.go:64] FLAG: --pod-cidr="" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800960 28120 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800968 28120 flags.go:64] FLAG: --pod-manifest-path="" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800976 28120 flags.go:64] FLAG: --pod-max-pids="-1" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 
15:01:01.800982 28120 flags.go:64] FLAG: --pods-per-core="0" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800987 28120 flags.go:64] FLAG: --port="10250" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800993 28120 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.800999 28120 flags.go:64] FLAG: --provider-id="" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.801004 28120 flags.go:64] FLAG: --qos-reserved="" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.801010 28120 flags.go:64] FLAG: --read-only-port="10255" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.801016 28120 flags.go:64] FLAG: --register-node="true" Feb 20 15:01:01.809433 master-0 kubenswrapper[28120]: I0220 15:01:01.801021 28120 flags.go:64] FLAG: --register-schedulable="true" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801027 28120 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801036 28120 flags.go:64] FLAG: --registry-burst="10" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801042 28120 flags.go:64] FLAG: --registry-qps="5" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801047 28120 flags.go:64] FLAG: --reserved-cpus="" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801053 28120 flags.go:64] FLAG: --reserved-memory="" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801059 28120 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801065 28120 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801071 28120 flags.go:64] FLAG: --rotate-certificates="false" Feb 20 
15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801077 28120 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801083 28120 flags.go:64] FLAG: --runonce="false" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801088 28120 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801094 28120 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801100 28120 flags.go:64] FLAG: --seccomp-default="false" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801105 28120 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801111 28120 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801117 28120 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801123 28120 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801128 28120 flags.go:64] FLAG: --storage-driver-password="root" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801134 28120 flags.go:64] FLAG: --storage-driver-secure="false" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801139 28120 flags.go:64] FLAG: --storage-driver-table="stats" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801145 28120 flags.go:64] FLAG: --storage-driver-user="root" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801151 28120 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801157 28120 flags.go:64] FLAG: 
--sync-frequency="1m0s" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801165 28120 flags.go:64] FLAG: --system-cgroups="" Feb 20 15:01:01.811495 master-0 kubenswrapper[28120]: I0220 15:01:01.801171 28120 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801179 28120 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801185 28120 flags.go:64] FLAG: --tls-cert-file="" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801191 28120 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801197 28120 flags.go:64] FLAG: --tls-min-version="" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801203 28120 flags.go:64] FLAG: --tls-private-key-file="" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801209 28120 flags.go:64] FLAG: --topology-manager-policy="none" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801214 28120 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801221 28120 flags.go:64] FLAG: --topology-manager-scope="container" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801226 28120 flags.go:64] FLAG: --v="2" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801233 28120 flags.go:64] FLAG: --version="false" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801240 28120 flags.go:64] FLAG: --vmodule="" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801251 28120 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: I0220 15:01:01.801257 28120 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 20 
15:01:01.813034 master-0 kubenswrapper[28120]: W0220 15:01:01.801385 28120 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: W0220 15:01:01.801392 28120 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: W0220 15:01:01.801397 28120 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: W0220 15:01:01.801403 28120 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: W0220 15:01:01.801409 28120 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: W0220 15:01:01.801414 28120 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: W0220 15:01:01.801419 28120 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: W0220 15:01:01.801424 28120 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: W0220 15:01:01.801430 28120 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 20 15:01:01.813034 master-0 kubenswrapper[28120]: W0220 15:01:01.801437 28120 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801443 28120 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801449 28120 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801455 28120 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801460 28120 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801465 28120 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801471 28120 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801476 28120 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801484 28120 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801489 28120 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801496 28120 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801503 28120 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801508 28120 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801513 28120 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801518 28120 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801523 28120 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801529 28120 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801534 28120 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801539 28120 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801544 28120 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 15:01:01.826249 master-0 kubenswrapper[28120]: W0220 15:01:01.801549 28120 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801556 28120 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801561 28120 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801566 28120 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801571 28120 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801576 28120 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801581 28120 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801586 28120 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801591 28120 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801596 28120 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801601 28120 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801606 28120 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801611 28120 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801615 28120 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801620 28120 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801625 28120 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801630 28120 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801635 28120 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801640 28120 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 15:01:01.832975 master-0 kubenswrapper[28120]: W0220 15:01:01.801645 28120 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801655 28120 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801661 28120 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801666 28120 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801672 28120 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801677 28120 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801682 28120 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801688 28120 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801693 28120 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801698 28120 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801704 28120 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801710 28120 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801715 28120 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801720 28120 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801727 28120 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801732 28120 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801737 28120 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801742 28120 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801747 28120 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 15:01:01.834587 master-0 kubenswrapper[28120]: W0220 15:01:01.801752 28120 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: W0220 15:01:01.801757 28120 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: W0220 15:01:01.801762 28120 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: W0220 15:01:01.801767 28120 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: W0220 15:01:01.801772 28120 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: I0220 15:01:01.801788 28120 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: I0220 15:01:01.813322 28120 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: I0220 15:01:01.813368 28120 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: W0220 15:01:01.813719 28120 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: W0220 15:01:01.813738 28120 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: W0220 15:01:01.813753 28120 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: W0220 15:01:01.813767 28120 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: W0220 15:01:01.813779 28120 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: W0220 15:01:01.813792 28120 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: W0220 15:01:01.813804 28120 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 15:01:01.836198 master-0 kubenswrapper[28120]: W0220 15:01:01.813816 28120 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.813827 28120 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.813839 28120 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.813861 28120 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.813873 28120 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.813886 28120 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.813898 28120 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.813909 28120 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.813951 28120 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.813965 28120 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.813982 28120 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.813999 28120 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.814012 28120 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.814025 28120 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.814038 28120 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.814061 28120 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.814075 28120 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.814086 28120 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.814099 28120 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.814111 28120 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 15:01:01.837059 master-0 kubenswrapper[28120]: W0220 15:01:01.814125 28120 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814136 28120 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814149 28120 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814161 28120 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814173 28120 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814185 28120 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814197 28120 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814233 28120 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814245 28120 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814257 28120 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814269 28120 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814280 28120 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814292 28120 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814303 28120 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814315 28120 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814326 28120 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814339 28120 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814350 28120 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814362 28120 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 15:01:01.838131 master-0 kubenswrapper[28120]: W0220 15:01:01.814374 28120 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814395 28120 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814407 28120 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814418 28120 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814430 28120 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814442 28120 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814454 28120 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814470 28120 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814486 28120 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814499 28120 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814511 28120 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814526 28120 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814537 28120 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814553 28120 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814576 28120 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814590 28120 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814602 28120 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814614 28120 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814629 28120 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 15:01:01.839308 master-0 kubenswrapper[28120]: W0220 15:01:01.814646 28120 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.814660 28120 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.814675 28120 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.814688 28120 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.814701 28120 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.814718 28120 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.814741 28120 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: I0220 15:01:01.814760 28120 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.815417 28120 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.815447 28120 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.815460 28120 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.815473 28120 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.815486 28120 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.815501 28120 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.815523 28120 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 20 15:01:01.840514 master-0 kubenswrapper[28120]: W0220 15:01:01.815536 28120 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815551 28120 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815565 28120 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815580 28120 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815592 28120 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815605 28120 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815618 28120 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815634 28120 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815652 28120 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815665 28120 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815677 28120 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815689 28120 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815711 28120 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815725 28120 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815737 28120 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815749 28120 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815763 28120 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815776 28120 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815789 28120 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 20 15:01:01.841403 master-0 kubenswrapper[28120]: W0220 15:01:01.815801 28120 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.815813 28120 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.815825 28120 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.815837 28120 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.815848 28120 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.815870 28120 feature_gate.go:330] unrecognized feature gate: Example
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.815882 28120 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.815895 28120 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.815907 28120 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.815918 28120 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.815965 28120 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.815978 28120 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.816021 28120 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.816033 28120 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.816049 28120 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.816064 28120 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.816076 28120 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.816087 28120 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.816111 28120 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 20 15:01:01.842443 master-0 kubenswrapper[28120]: W0220 15:01:01.816123 28120 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816135 28120 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816147 28120 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816159 28120 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816170 28120 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816181 28120 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816194 28120 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816205 28120 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816217 28120 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816232 28120 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816248 28120 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816272 28120 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816285 28120 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816297 28120 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816309 28120 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816321 28120 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816337 28120 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816351 28120 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816364 28120 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816376 28120 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 20 15:01:01.843682 master-0 kubenswrapper[28120]: W0220 15:01:01.816387 28120 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: W0220 15:01:01.816399 28120 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: W0220 15:01:01.816411 28120 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: W0220 15:01:01.816434 28120 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: W0220 15:01:01.816448 28120 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: W0220 15:01:01.816460 28120 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: W0220 15:01:01.816472 28120 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: I0220 15:01:01.816491 28120 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: I0220 15:01:01.817026 28120 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: I0220 15:01:01.824041 28120 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: I0220 15:01:01.824210 28120 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: I0220 15:01:01.824792 28120 server.go:997] "Starting client certificate rotation"
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: I0220 15:01:01.824816 28120 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 20 15:01:01.844917 master-0 kubenswrapper[28120]: I0220 15:01:01.827441 28120 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-21 14:36:18 +0000 UTC, rotation deadline is 2026-02-21 09:18:06.282201159 +0000 UTC
Feb 20 15:01:01.845798 master-0 kubenswrapper[28120]: I0220 15:01:01.827571 28120 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h17m4.4546368s for next certificate rotation
Feb 20 15:01:01.845798 master-0 kubenswrapper[28120]: I0220 15:01:01.828669 28120 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 20 15:01:01.845798 master-0 kubenswrapper[28120]: I0220 15:01:01.831184 28120 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 20 15:01:01.845798 master-0 kubenswrapper[28120]: I0220 15:01:01.843011 28120 log.go:25] "Validated CRI v1 runtime API"
Feb 20 15:01:01.851281 master-0 kubenswrapper[28120]: I0220 15:01:01.851231 28120 log.go:25] "Validated CRI v1 image API"
Feb 20 15:01:01.852747 master-0 kubenswrapper[28120]: I0220 15:01:01.852704 28120 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 20 15:01:01.873146 master-0 kubenswrapper[28120]: I0220 15:01:01.873054 28120 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 f887e099-fa60-4eeb-b981-d71fb787fc62:/dev/vda3]
Feb 20 15:01:01.874888 master-0 kubenswrapper[28120]: I0220 15:01:01.873108 28120 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/013da989dc1e60fa75e3d1e3955a83dece7eed7353880205a9acd5aa5c2d4d69/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/013da989dc1e60fa75e3d1e3955a83dece7eed7353880205a9acd5aa5c2d4d69/userdata/shm major:0 minor:1124 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0170d69b891340d8304a044f9ba11f3c45572b8e1e7f16d78f09e0c25d8c5a22/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0170d69b891340d8304a044f9ba11f3c45572b8e1e7f16d78f09e0c25d8c5a22/userdata/shm major:0 minor:646 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/04c921d85b432c0d1b6bd571166f434dca8313768c8990c88277ecdb55bd26c7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/04c921d85b432c0d1b6bd571166f434dca8313768c8990c88277ecdb55bd26c7/userdata/shm major:0 minor:537 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/07243cbc35256d0bbc44485dfcf1dcdc835463392fa9dc5f89599380e929e672/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/07243cbc35256d0bbc44485dfcf1dcdc835463392fa9dc5f89599380e929e672/userdata/shm major:0 minor:791 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/07f2250f0416c7a8aaa5ba7190cd272a32f30bcb4026105fc1ebf0050f1e79f2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/07f2250f0416c7a8aaa5ba7190cd272a32f30bcb4026105fc1ebf0050f1e79f2/userdata/shm major:0 minor:324 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0c1b7791952a54d8b3ef36cceac195dbbcc9face3120a05a59672ee12b84ba46/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0c1b7791952a54d8b3ef36cceac195dbbcc9face3120a05a59672ee12b84ba46/userdata/shm major:0 minor:895 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0c48d8481d8bb6541d7d83f4ffc4e7c6003e82f4f8d378fb9a1333d706bc6f14/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0c48d8481d8bb6541d7d83f4ffc4e7c6003e82f4f8d378fb9a1333d706bc6f14/userdata/shm major:0 minor:438 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0ea53368ce61e6c8836a7d0c6d716b7e2c7e18ee974ab80f253b08e24d34227b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0ea53368ce61e6c8836a7d0c6d716b7e2c7e18ee974ab80f253b08e24d34227b/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/118104a32f855cf343fc9a68201c174973d8b0ae6653c1a549eeef25c7c2eefa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/118104a32f855cf343fc9a68201c174973d8b0ae6653c1a549eeef25c7c2eefa/userdata/shm major:0 minor:429 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/13613c47bf97c812cc9e166f449f1af9864a34c9dcb66bd85e8e3c727e970a41/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/13613c47bf97c812cc9e166f449f1af9864a34c9dcb66bd85e8e3c727e970a41/userdata/shm major:0 minor:97 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1489b48b9281848030ac8650ba6a4f51919e00d3276dcba9cb79f43f94b0f041/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1489b48b9281848030ac8650ba6a4f51919e00d3276dcba9cb79f43f94b0f041/userdata/shm major:0 minor:113 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1913b004153de96aee747d5e43e4468694e4be30746f1b0a2aa4f60e2176707c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1913b004153de96aee747d5e43e4468694e4be30746f1b0a2aa4f60e2176707c/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/22094081262cfd9afca75424166ecb944e973d770312e29078a1dee4fb675d30/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/22094081262cfd9afca75424166ecb944e973d770312e29078a1dee4fb675d30/userdata/shm major:0 minor:794 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2210f3254bc0bc47bf63efd7d8223a017f9ce1d63560804be28d1d5db58e4a7d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2210f3254bc0bc47bf63efd7d8223a017f9ce1d63560804be28d1d5db58e4a7d/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/236aeb004972a9d3e9949ce545b3cfedb3b4ea60df38f4b61a82d0b2465524af/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/236aeb004972a9d3e9949ce545b3cfedb3b4ea60df38f4b61a82d0b2465524af/userdata/shm major:0 minor:893 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/26c5fe83ca44257f00aa75056a5ba23aa71fd99df73033faf567ea11ded1340f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/26c5fe83ca44257f00aa75056a5ba23aa71fd99df73033faf567ea11ded1340f/userdata/shm major:0 minor:143 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/29c1db2527f092355034b5557942ea50b25282b9b77501d427c1a6d0e01d2771/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/29c1db2527f092355034b5557942ea50b25282b9b77501d427c1a6d0e01d2771/userdata/shm major:0 minor:1356 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2d789ae2430f40a62d0c76334dce72b1228320484eb36b8f7f3663eb8534eb42/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2d789ae2430f40a62d0c76334dce72b1228320484eb36b8f7f3663eb8534eb42/userdata/shm major:0 minor:1181 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3209ad8e141d4f4023abb0b8711dc267473b98fd78163c32b9a46c610babe186/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3209ad8e141d4f4023abb0b8711dc267473b98fd78163c32b9a46c610babe186/userdata/shm major:0 minor:1226 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/329b7497d730cc1438c1c88bd3563dab745cc5c71baf09835af567df43aee00e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/329b7497d730cc1438c1c88bd3563dab745cc5c71baf09835af567df43aee00e/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/34bf21f0d5e74283c2c3382d9b925b925de6b532a3f67ab7bff4afdbe95f9332/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/34bf21f0d5e74283c2c3382d9b925b925de6b532a3f67ab7bff4afdbe95f9332/userdata/shm major:0 minor:440 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/34cc992d367669608546ba8ae39873d4139dfeeb4850c5979567cde508c8b524/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/34cc992d367669608546ba8ae39873d4139dfeeb4850c5979567cde508c8b524/userdata/shm major:0 minor:772 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3595d9d8fc957b18c48383f1ad0fcfa521ef5e3e33c6ab788b51ff8638981630/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3595d9d8fc957b18c48383f1ad0fcfa521ef5e3e33c6ab788b51ff8638981630/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3c3c6a0066a2da65aa0c6f5621f865feea551c3602354f05a3bf53b7f588a01e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3c3c6a0066a2da65aa0c6f5621f865feea551c3602354f05a3bf53b7f588a01e/userdata/shm major:0 minor:1063 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3e54884bb129553f96e22ded74db5788d449f044a28bbdd487ce407f3c14ba01/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3e54884bb129553f96e22ded74db5788d449f044a28bbdd487ce407f3c14ba01/userdata/shm major:0 minor:512 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3f68274f91c27d15a060c5bac225b0b94e8aa70b90454461d048fa9e384a03df/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f68274f91c27d15a060c5bac225b0b94e8aa70b90454461d048fa9e384a03df/userdata/shm major:0 minor:283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/437abb0aba17c9c29dae7086b861fc64a62c90a30c1567fbdec9a15f52cef039/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/437abb0aba17c9c29dae7086b861fc64a62c90a30c1567fbdec9a15f52cef039/userdata/shm major:0 minor:802 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/469af398b29095aa460373b4a9d58261db50995525853368aaa76c2198d9753f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/469af398b29095aa460373b4a9d58261db50995525853368aaa76c2198d9753f/userdata/shm major:0 minor:117 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4d7a859ad253e344142e3d8002817623ee421d3b324eff2b6246c1b1fdd11bc1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4d7a859ad253e344142e3d8002817623ee421d3b324eff2b6246c1b1fdd11bc1/userdata/shm major:0 minor:481 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4e9788fdd4565e3a230622830adb39ca18b14112a272177c052904a2d24b6cd0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4e9788fdd4565e3a230622830adb39ca18b14112a272177c052904a2d24b6cd0/userdata/shm major:0 minor:1165 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/51c5a5d32ca643efba642911927baab174d9c9270d18541b0810089261e8c8d5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/51c5a5d32ca643efba642911927baab174d9c9270d18541b0810089261e8c8d5/userdata/shm major:0 minor:805 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/535151362e36c1745033704c37dfb910d9260b348b0c35a197ec5a2c74a4ea53/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/535151362e36c1745033704c37dfb910d9260b348b0c35a197ec5a2c74a4ea53/userdata/shm major:0 minor:1066 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5412cad37cfea94450b3688c380c9cc1161ff7a9a7f0b141297d24e746b33629/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5412cad37cfea94450b3688c380c9cc1161ff7a9a7f0b141297d24e746b33629/userdata/shm major:0 minor:770 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/56784add7fab2d6fa30c1dec4a904d183b8bd0ff401f8eca8e9ad2aff7741c30/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/56784add7fab2d6fa30c1dec4a904d183b8bd0ff401f8eca8e9ad2aff7741c30/userdata/shm major:0 minor:809 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/56dab50a6ee92d8b7787a1ffbdfc72e9a26511781eb108040e7d6dc84a65109f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/56dab50a6ee92d8b7787a1ffbdfc72e9a26511781eb108040e7d6dc84a65109f/userdata/shm major:0 minor:796 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5c30b9cdcf13e6a3816e39ff92455fc96f090fac8eb9899e480122d604e7a1b8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5c30b9cdcf13e6a3816e39ff92455fc96f090fac8eb9899e480122d604e7a1b8/userdata/shm major:0 minor:517 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5ea8ac7578359ce087855682fd87fbd08a72604f8701716ddbb28b051d93bff2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5ea8ac7578359ce087855682fd87fbd08a72604f8701716ddbb28b051d93bff2/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5fc1828a85716c5c152a1e9d497ac8c147726f1a98a02df72c44bdcd9feda4f1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5fc1828a85716c5c152a1e9d497ac8c147726f1a98a02df72c44bdcd9feda4f1/userdata/shm major:0 minor:85 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6315ef904771a7f7ee8f8fb64b568088a83f03dc9235439160e67d9df1c9a04f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6315ef904771a7f7ee8f8fb64b568088a83f03dc9235439160e67d9df1c9a04f/userdata/shm major:0 minor:286 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/6ea59bb762ddd917687d0ab9c9b4c4c212079c243fa33d303d25cc82d89c923b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6ea59bb762ddd917687d0ab9c9b4c4c212079c243fa33d303d25cc82d89c923b/userdata/shm major:0 minor:1064 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7190b6f768a0fe97808696f83db6e3236f51dc32c15727d9791bd6e154e97696/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7190b6f768a0fe97808696f83db6e3236f51dc32c15727d9791bd6e154e97696/userdata/shm major:0 minor:293 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/733f20d59a2548ac1c9bcca1dc13fb3a2581f1cde83bb3bdf7f826c178e76f76/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/733f20d59a2548ac1c9bcca1dc13fb3a2581f1cde83bb3bdf7f826c178e76f76/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7cd291b9260d8474da6db1ea27593954a0b8a80d92876d3da551d5f4c38e22a4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7cd291b9260d8474da6db1ea27593954a0b8a80d92876d3da551d5f4c38e22a4/userdata/shm major:0 minor:1167 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7d7dfb1a01a9470453018e9e4e99ad966573e066e4eb9b370f42ef7d7426a75e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7d7dfb1a01a9470453018e9e4e99ad966573e066e4eb9b370f42ef7d7426a75e/userdata/shm major:0 minor:519 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/80505c2710f2e2216eec6a4e82e9601038f01af58386ea11bb977eb9c2b78e51/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/80505c2710f2e2216eec6a4e82e9601038f01af58386ea11bb977eb9c2b78e51/userdata/shm major:0 minor:768 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/80b53aa57494cc0bc6bbacad6b2e04131adc3c0ab6e7a77f83dd0c6c91461d7d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/80b53aa57494cc0bc6bbacad6b2e04131adc3c0ab6e7a77f83dd0c6c91461d7d/userdata/shm major:0 minor:518 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/81b14b205a5b43d7cf78b359f564d3ae3e67aaf00f87262df973d130ce6f30c0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/81b14b205a5b43d7cf78b359f564d3ae3e67aaf00f87262df973d130ce6f30c0/userdata/shm major:0 minor:513 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8468bd2a2161175e696f20868531488b079471cbb37c953cccf04ab9a47ce2b3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8468bd2a2161175e696f20868531488b079471cbb37c953cccf04ab9a47ce2b3/userdata/shm major:0 minor:739 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84da6dcc282a18c48a027b33cd2404e3592b75c697de5dd4ab39e2cebf5cff28/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84da6dcc282a18c48a027b33cd2404e3592b75c697de5dd4ab39e2cebf5cff28/userdata/shm major:0 minor:520 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/864b7e188cfb62e2b7e87dc90ff4536aab0f9cd5aed1bd5481272fd1babe2e98/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/864b7e188cfb62e2b7e87dc90ff4536aab0f9cd5aed1bd5481272fd1babe2e98/userdata/shm major:0 minor:1031 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/88c6fd1112c1b3efe31f79a2dc6cd9198555dc6b1c7c6547da60005b56efbb9b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/88c6fd1112c1b3efe31f79a2dc6cd9198555dc6b1c7c6547da60005b56efbb9b/userdata/shm major:0 minor:696 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8df5627ff680da0c81aa3a3c2df511cdff6fa3f30ba3845441250cbb689ca7f4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8df5627ff680da0c81aa3a3c2df511cdff6fa3f30ba3845441250cbb689ca7f4/userdata/shm major:0 minor:909 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8ef8165957098f6be8792289e9cb306a276c73110e287a7b80ba51a3888e812c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8ef8165957098f6be8792289e9cb306a276c73110e287a7b80ba51a3888e812c/userdata/shm major:0 minor:1091 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/92c9b6ef7965615602e16b5814c26d9915a23507222fc502b624945d6f4ccc53/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/92c9b6ef7965615602e16b5814c26d9915a23507222fc502b624945d6f4ccc53/userdata/shm major:0 minor:800 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/92d6a373c92ade68969e49443823f212abf3c0859e9aaf5d10ff5913a474e6f8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/92d6a373c92ade68969e49443823f212abf3c0859e9aaf5d10ff5913a474e6f8/userdata/shm major:0 minor:278 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/934ad9d048e353486054177eacce7219c994c68dfad561ddfd4035fc938101d3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/934ad9d048e353486054177eacce7219c994c68dfad561ddfd4035fc938101d3/userdata/shm major:0 minor:808 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95115710de33578fe832a95630e8d98eba6ecc806a442bdc7740ad889ac1e80b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95115710de33578fe832a95630e8d98eba6ecc806a442bdc7740ad889ac1e80b/userdata/shm major:0 minor:131 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/95650a37daeacacf8e69d045d48ba4a17652648a0c83345072715e4ffcfa2dda/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95650a37daeacacf8e69d045d48ba4a17652648a0c83345072715e4ffcfa2dda/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/972260fa4d71d5a14fa2c2c948e5708100e799e6a9e6ff6a656d3e5a79c34eaa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/972260fa4d71d5a14fa2c2c948e5708100e799e6a9e6ff6a656d3e5a79c34eaa/userdata/shm major:0 minor:908 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/996b54ad7bf339a39ffff49432d0181ad23ef73bddec2b3817ca026944ee2962/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/996b54ad7bf339a39ffff49432d0181ad23ef73bddec2b3817ca026944ee2962/userdata/shm major:0 minor:69 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9bd614ac7dafc38d2154363d724a872731a806692546d4bc858006cdc5ade17d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9bd614ac7dafc38d2154363d724a872731a806692546d4bc858006cdc5ade17d/userdata/shm major:0 minor:285 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9df920ca539f41ddc66a331c27bc3a12a40dbc8ec795ca71f8a746f6b5203647/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9df920ca539f41ddc66a331c27bc3a12a40dbc8ec795ca71f8a746f6b5203647/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a3b80d783578c7d5bcce0396d10b0b7507567b7ddeed1d7dec131680bd38e6da/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a3b80d783578c7d5bcce0396d10b0b7507567b7ddeed1d7dec131680bd38e6da/userdata/shm major:0 minor:694 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a9fb4904f90243607c1bd114c0e1c541fb17de9f6f5ce80d7f75369901ce613b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a9fb4904f90243607c1bd114c0e1c541fb17de9f6f5ce80d7f75369901ce613b/userdata/shm major:0 minor:813 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aa71c4fa879120a78bd3b6a5ee4f553adcd2305018af6f53632371d2a776a283/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aa71c4fa879120a78bd3b6a5ee4f553adcd2305018af6f53632371d2a776a283/userdata/shm major:0 minor:552 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/afc706c41127ee1f98bf413cd8a012a0e0a8f183eef4bf77721d14a272ded89e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/afc706c41127ee1f98bf413cd8a012a0e0a8f183eef4bf77721d14a272ded89e/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4cf8dbc3fd31a273c2cbd586eecdb2a0961392b7bd552bb39381cfb88539e45/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4cf8dbc3fd31a273c2cbd586eecdb2a0961392b7bd552bb39381cfb88539e45/userdata/shm major:0 minor:280 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ba0f9ce144b093c1fbdb0462da21ced21845e2aa8fb2233766270fcddb816e51/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ba0f9ce144b093c1fbdb0462da21ced21845e2aa8fb2233766270fcddb816e51/userdata/shm major:0 minor:380 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cc5528fa6db2bfe114c1842f536c398cb14a3103bc976fa904abdc30e48bc9b3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cc5528fa6db2bfe114c1842f536c398cb14a3103bc976fa904abdc30e48bc9b3/userdata/shm major:0 minor:818 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/db318f21d539d497ae2372897b56aaa3b6fedeaae97e556d74c5b3c251315d6e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/db318f21d539d497ae2372897b56aaa3b6fedeaae97e556d74c5b3c251315d6e/userdata/shm major:0 minor:912 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dd0998467d8099b6ff8531304dd3f0e97b5c79ad6520753dadef997846c4d469/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dd0998467d8099b6ff8531304dd3f0e97b5c79ad6520753dadef997846c4d469/userdata/shm major:0 minor:911 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e1b8782a8564dd4906c6406ffd3ad6cd072d92723a07ad86ed42c394d07ab355/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e1b8782a8564dd4906c6406ffd3ad6cd072d92723a07ad86ed42c394d07ab355/userdata/shm major:0 minor:406 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e4a8f393be39a3a9efde4bf2412add15fe01a8acdf8e5580190095494f3e6b47/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e4a8f393be39a3a9efde4bf2412add15fe01a8acdf8e5580190095494f3e6b47/userdata/shm major:0 minor:663 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e94527abc555de66f60f9e134865dfe60d787ebd1878546078cb9b2523c30cab/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e94527abc555de66f60f9e134865dfe60d787ebd1878546078cb9b2523c30cab/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ec64844e3e46d42ec4c570bb811039de046f41f872bc256c338ea6312e07ba0d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ec64844e3e46d42ec4c570bb811039de046f41f872bc256c338ea6312e07ba0d/userdata/shm major:0 minor:1293 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ed48d3d3cb753c9bbe342f9ecdd79f0991ed3456ddbdf3081cbeeab5126bcab1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ed48d3d3cb753c9bbe342f9ecdd79f0991ed3456ddbdf3081cbeeab5126bcab1/userdata/shm major:0 minor:806 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f83848e1580bc2bc923ed29b258b640fe63d1b2a36889eeff462ef2f63db0d04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f83848e1580bc2bc923ed29b258b640fe63d1b2a36889eeff462ef2f63db0d04/userdata/shm major:0 minor:777 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fa9d778b1d5703420b9779e5e17c8c6a6104fc97f8264778eb9ed382719853b9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fa9d778b1d5703420b9779e5e17c8c6a6104fc97f8264778eb9ed382719853b9/userdata/shm major:0 minor:652 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0bedbe69-fc4b-4bd7-bcc2-acead927eda2/volumes/kubernetes.io~projected/kube-api-access-gk2lq:{mountpoint:/var/lib/kubelet/pods/0bedbe69-fc4b-4bd7-bcc2-acead927eda2/volumes/kubernetes.io~projected/kube-api-access-gk2lq major:0 minor:790 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0bedbe69-fc4b-4bd7-bcc2-acead927eda2/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/0bedbe69-fc4b-4bd7-bcc2-acead927eda2/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:782 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16d6dd52-d73b-4696-873e-00a6d4bb2c77/volumes/kubernetes.io~projected/kube-api-access-sxncg:{mountpoint:/var/lib/kubelet/pods/16d6dd52-d73b-4696-873e-00a6d4bb2c77/volumes/kubernetes.io~projected/kube-api-access-sxncg major:0 minor:787 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/16d6dd52-d73b-4696-873e-00a6d4bb2c77/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/16d6dd52-d73b-4696-873e-00a6d4bb2c77/volumes/kubernetes.io~secret/proxy-tls major:0 minor:778 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19ce4b45-db46-4fc3-8d72-963de22f026b/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/19ce4b45-db46-4fc3-8d72-963de22f026b/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:616 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19ce4b45-db46-4fc3-8d72-963de22f026b/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/19ce4b45-db46-4fc3-8d72-963de22f026b/volumes/kubernetes.io~empty-dir/tmp major:0 minor:624 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19ce4b45-db46-4fc3-8d72-963de22f026b/volumes/kubernetes.io~projected/kube-api-access-45226:{mountpoint:/var/lib/kubelet/pods/19ce4b45-db46-4fc3-8d72-963de22f026b/volumes/kubernetes.io~projected/kube-api-access-45226 major:0 minor:625 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1fe69517-eec2-4721-933c-fa27cea7ab1f/volumes/kubernetes.io~projected/kube-api-access-rnwtd:{mountpoint:/var/lib/kubelet/pods/1fe69517-eec2-4721-933c-fa27cea7ab1f/volumes/kubernetes.io~projected/kube-api-access-rnwtd major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1fe69517-eec2-4721-933c-fa27cea7ab1f/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/1fe69517-eec2-4721-933c-fa27cea7ab1f/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:564 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volumes/kubernetes.io~projected/kube-api-access-gr6nr:{mountpoint:/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volumes/kubernetes.io~projected/kube-api-access-gr6nr major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~projected/kube-api-access-gwb5n:{mountpoint:/var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~projected/kube-api-access-gwb5n major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~secret/etcd-client major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~secret/serving-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/26473c28-db42-47e6-9164-8c441ccc48ca/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/26473c28-db42-47e6-9164-8c441ccc48ca/volumes/kubernetes.io~projected/kube-api-access major:0 minor:480 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/26473c28-db42-47e6-9164-8c441ccc48ca/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/26473c28-db42-47e6-9164-8c441ccc48ca/volumes/kubernetes.io~secret/serving-cert major:0 minor:112 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/27ab8945-6a5b-4f7d-b893-6358da214499/volumes/kubernetes.io~projected/kube-api-access-jshgm:{mountpoint:/var/lib/kubelet/pods/27ab8945-6a5b-4f7d-b893-6358da214499/volumes/kubernetes.io~projected/kube-api-access-jshgm major:0 minor:762 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/27ab8945-6a5b-4f7d-b893-6358da214499/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/27ab8945-6a5b-4f7d-b893-6358da214499/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:758 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a/volumes/kubernetes.io~projected/kube-api-access-tl7wm:{mountpoint:/var/lib/kubelet/pods/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a/volumes/kubernetes.io~projected/kube-api-access-tl7wm major:0 minor:786 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:779 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a/volumes/kubernetes.io~secret/srv-cert major:0 minor:792 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31d71c90-cab7-4411-9426-0713cb026294/volumes/kubernetes.io~projected/kube-api-access-57cks:{mountpoint:/var/lib/kubelet/pods/31d71c90-cab7-4411-9426-0713cb026294/volumes/kubernetes.io~projected/kube-api-access-57cks major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31d71c90-cab7-4411-9426-0713cb026294/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/31d71c90-cab7-4411-9426-0713cb026294/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:505 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/31d71c90-cab7-4411-9426-0713cb026294/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/31d71c90-cab7-4411-9426-0713cb026294/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:509 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/32a79fe0-e619-4a66-8617-e8111bdc7e96/volumes/kubernetes.io~projected/kube-api-access-jkq7j:{mountpoint:/var/lib/kubelet/pods/32a79fe0-e619-4a66-8617-e8111bdc7e96/volumes/kubernetes.io~projected/kube-api-access-jkq7j major:0 minor:110 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33675e96-ce49-49be-9117-954ac7cca5d5/volumes/kubernetes.io~projected/kube-api-access-hbw6n:{mountpoint:/var/lib/kubelet/pods/33675e96-ce49-49be-9117-954ac7cca5d5/volumes/kubernetes.io~projected/kube-api-access-hbw6n major:0 minor:167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33675e96-ce49-49be-9117-954ac7cca5d5/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/33675e96-ce49-49be-9117-954ac7cca5d5/volumes/kubernetes.io~secret/webhook-cert major:0 minor:166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367/volumes/kubernetes.io~projected/kube-api-access-2vz22:{mountpoint:/var/lib/kubelet/pods/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367/volumes/kubernetes.io~projected/kube-api-access-2vz22 major:0 minor:1030 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367/volumes/kubernetes.io~secret/proxy-tls major:0 minor:1029 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/419f28a9-8fd7-4b59-9554-4d884a1208b5/volumes/kubernetes.io~projected/kube-api-access-fttgr:{mountpoint:/var/lib/kubelet/pods/419f28a9-8fd7-4b59-9554-4d884a1208b5/volumes/kubernetes.io~projected/kube-api-access-fttgr major:0 minor:265 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/419f28a9-8fd7-4b59-9554-4d884a1208b5/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/419f28a9-8fd7-4b59-9554-4d884a1208b5/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:508 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43e9807a-859c-44c1-8511-0066b0f59ff8/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/43e9807a-859c-44c1-8511-0066b0f59ff8/volumes/kubernetes.io~projected/kube-api-access major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43e9807a-859c-44c1-8511-0066b0f59ff8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/43e9807a-859c-44c1-8511-0066b0f59ff8/volumes/kubernetes.io~secret/serving-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/448aafd2-ffb3-42c5-8085-f6194d7862e5/volumes/kubernetes.io~projected/kube-api-access-nv57n:{mountpoint:/var/lib/kubelet/pods/448aafd2-ffb3-42c5-8085-f6194d7862e5/volumes/kubernetes.io~projected/kube-api-access-nv57n major:0 minor:644 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45d7ef0c-272b-4d1e-965f-484975d5d25c/volumes/kubernetes.io~projected/kube-api-access-svhtr:{mountpoint:/var/lib/kubelet/pods/45d7ef0c-272b-4d1e-965f-484975d5d25c/volumes/kubernetes.io~projected/kube-api-access-svhtr major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45d7ef0c-272b-4d1e-965f-484975d5d25c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/45d7ef0c-272b-4d1e-965f-484975d5d25c/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49044786-483a-406e-8750-f6ded400841d/volumes/kubernetes.io~projected/kube-api-access-jljjg:{mountpoint:/var/lib/kubelet/pods/49044786-483a-406e-8750-f6ded400841d/volumes/kubernetes.io~projected/kube-api-access-jljjg major:0 minor:763 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/49044786-483a-406e-8750-f6ded400841d/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/49044786-483a-406e-8750-f6ded400841d/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:760 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49defec6-a225-47ab-99ff-7a846f23eb00/volumes/kubernetes.io~projected/kube-api-access-k94cb:{mountpoint:/var/lib/kubelet/pods/49defec6-a225-47ab-99ff-7a846f23eb00/volumes/kubernetes.io~projected/kube-api-access-k94cb major:0 minor:1292 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49defec6-a225-47ab-99ff-7a846f23eb00/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/49defec6-a225-47ab-99ff-7a846f23eb00/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1287 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:761 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/volumes/kubernetes.io~projected/kube-api-access-mwnq7:{mountpoint:/var/lib/kubelet/pods/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/volumes/kubernetes.io~projected/kube-api-access-mwnq7 major:0 minor:759 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/volumes/kubernetes.io~secret/metrics-tls major:0 minor:756 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4c31b8a7-edcb-403d-9122-7eb740f7d659/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4c31b8a7-edcb-403d-9122-7eb740f7d659/volumes/kubernetes.io~projected/kube-api-access major:0 minor:244 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4c31b8a7-edcb-403d-9122-7eb740f7d659/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4c31b8a7-edcb-403d-9122-7eb740f7d659/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665/volumes/kubernetes.io~projected/kube-api-access-lc9pl:{mountpoint:/var/lib/kubelet/pods/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665/volumes/kubernetes.io~projected/kube-api-access-lc9pl major:0 minor:993 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665/volumes/kubernetes.io~secret/cert major:0 minor:1123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4ecbdf77-0c73-487e-943e-5315a0f8b8d4/volumes/kubernetes.io~projected/kube-api-access-ntlv2:{mountpoint:/var/lib/kubelet/pods/4ecbdf77-0c73-487e-943e-5315a0f8b8d4/volumes/kubernetes.io~projected/kube-api-access-ntlv2 major:0 minor:901 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4ecbdf77-0c73-487e-943e-5315a0f8b8d4/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/4ecbdf77-0c73-487e-943e-5315a0f8b8d4/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:900 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4ecbdf77-0c73-487e-943e-5315a0f8b8d4/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/4ecbdf77-0c73-487e-943e-5315a0f8b8d4/volumes/kubernetes.io~secret/webhook-cert major:0 minor:899 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5d2b154b-de63-4c9b-99d8-487fb3035fb9/volumes/kubernetes.io~projected/kube-api-access-mclrj:{mountpoint:/var/lib/kubelet/pods/5d2b154b-de63-4c9b-99d8-487fb3035fb9/volumes/kubernetes.io~projected/kube-api-access-mclrj major:0 minor:139 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5d2b154b-de63-4c9b-99d8-487fb3035fb9/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/5d2b154b-de63-4c9b-99d8-487fb3035fb9/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ea4c132-b6d0-4dc9-942d-48e359eed418/volumes/kubernetes.io~projected/kube-api-access-7nlf9:{mountpoint:/var/lib/kubelet/pods/5ea4c132-b6d0-4dc9-942d-48e359eed418/volumes/kubernetes.io~projected/kube-api-access-7nlf9 major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ea4c132-b6d0-4dc9-942d-48e359eed418/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/5ea4c132-b6d0-4dc9-942d-48e359eed418/volumes/kubernetes.io~secret/metrics-certs major:0 minor:511 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5f55b652-bef8-4f50-9d1d-9d0a340c1dea/volumes/kubernetes.io~projected/kube-api-access-rj796:{mountpoint:/var/lib/kubelet/pods/5f55b652-bef8-4f50-9d1d-9d0a340c1dea/volumes/kubernetes.io~projected/kube-api-access-rj796 major:0 minor:1069 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5f55b652-bef8-4f50-9d1d-9d0a340c1dea/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/5f55b652-bef8-4f50-9d1d-9d0a340c1dea/volumes/kubernetes.io~secret/default-certificate major:0 minor:1073 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5f55b652-bef8-4f50-9d1d-9d0a340c1dea/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/5f55b652-bef8-4f50-9d1d-9d0a340c1dea/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1068 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5f55b652-bef8-4f50-9d1d-9d0a340c1dea/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/5f55b652-bef8-4f50-9d1d-9d0a340c1dea/volumes/kubernetes.io~secret/stats-auth major:0 minor:1067 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd/volumes/kubernetes.io~projected/kube-api-access-wxjcq:{mountpoint:/var/lib/kubelet/pods/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd/volumes/kubernetes.io~projected/kube-api-access-wxjcq major:0 minor:688 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd/volumes/kubernetes.io~secret/serving-cert major:0 minor:686 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/64e9eca9-bbdd-4eca-9219-922bbab9b388/volumes/kubernetes.io~projected/kube-api-access-47sqj:{mountpoint:/var/lib/kubelet/pods/64e9eca9-bbdd-4eca-9219-922bbab9b388/volumes/kubernetes.io~projected/kube-api-access-47sqj major:0 minor:801 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/64e9eca9-bbdd-4eca-9219-922bbab9b388/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/64e9eca9-bbdd-4eca-9219-922bbab9b388/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:799 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/64e9eca9-bbdd-4eca-9219-922bbab9b388/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/64e9eca9-bbdd-4eca-9219-922bbab9b388/volumes/kubernetes.io~secret/srv-cert major:0 minor:798 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6949e9d5-460c-4b63-94cb-1b20ad75ee1c/volumes/kubernetes.io~projected/kube-api-access-jpt8j:{mountpoint:/var/lib/kubelet/pods/6949e9d5-460c-4b63-94cb-1b20ad75ee1c/volumes/kubernetes.io~projected/kube-api-access-jpt8j major:0 minor:691 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6949e9d5-460c-4b63-94cb-1b20ad75ee1c/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/6949e9d5-460c-4b63-94cb-1b20ad75ee1c/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:690 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3/volumes/kubernetes.io~projected/kube-api-access-2xd6r:{mountpoint:/var/lib/kubelet/pods/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3/volumes/kubernetes.io~projected/kube-api-access-2xd6r major:0 minor:892 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/787a4fee-6625-4df5-a432-c7e1190da777/volumes/kubernetes.io~projected/kube-api-access-9k6br:{mountpoint:/var/lib/kubelet/pods/787a4fee-6625-4df5-a432-c7e1190da777/volumes/kubernetes.io~projected/kube-api-access-9k6br major:0 minor:405 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/787a4fee-6625-4df5-a432-c7e1190da777/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/787a4fee-6625-4df5-a432-c7e1190da777/volumes/kubernetes.io~secret/signing-key major:0 minor:400 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8157f73d-c757-40c4-80bc-3c9de2f2288a/volumes/kubernetes.io~projected/kube-api-access-bk5m4:{mountpoint:/var/lib/kubelet/pods/8157f73d-c757-40c4-80bc-3c9de2f2288a/volumes/kubernetes.io~projected/kube-api-access-bk5m4 major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8157f73d-c757-40c4-80bc-3c9de2f2288a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8157f73d-c757-40c4-80bc-3c9de2f2288a/volumes/kubernetes.io~secret/serving-cert major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/84a61910-48eb-4c27-8d69-f6aa7ce912ca/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/84a61910-48eb-4c27-8d69-f6aa7ce912ca/volumes/kubernetes.io~projected/ca-certs major:0 minor:436 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/84a61910-48eb-4c27-8d69-f6aa7ce912ca/volumes/kubernetes.io~projected/kube-api-access-l5fng:{mountpoint:/var/lib/kubelet/pods/84a61910-48eb-4c27-8d69-f6aa7ce912ca/volumes/kubernetes.io~projected/kube-api-access-l5fng major:0 minor:437 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/86f6836b-b018-4c7a-87ad-51809a4b9c7a/volumes/kubernetes.io~projected/kube-api-access-wcffg:{mountpoint:/var/lib/kubelet/pods/86f6836b-b018-4c7a-87ad-51809a4b9c7a/volumes/kubernetes.io~projected/kube-api-access-wcffg major:0 minor:785 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86f6836b-b018-4c7a-87ad-51809a4b9c7a/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/86f6836b-b018-4c7a-87ad-51809a4b9c7a/volumes/kubernetes.io~secret/cert major:0 minor:783 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86f6836b-b018-4c7a-87ad-51809a4b9c7a/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/86f6836b-b018-4c7a-87ad-51809a4b9c7a/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:784 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/87cf4690-1ec1-44fc-94bd-730d9f2e6762/volumes/kubernetes.io~projected/kube-api-access-r9c94:{mountpoint:/var/lib/kubelet/pods/87cf4690-1ec1-44fc-94bd-730d9f2e6762/volumes/kubernetes.io~projected/kube-api-access-r9c94 major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a278abf-8c59-4454-94d0-a0d0768cbec5/volumes/kubernetes.io~projected/kube-api-access-r9crd:{mountpoint:/var/lib/kubelet/pods/8a278abf-8c59-4454-94d0-a0d0768cbec5/volumes/kubernetes.io~projected/kube-api-access-r9crd major:0 minor:765 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a278abf-8c59-4454-94d0-a0d0768cbec5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8a278abf-8c59-4454-94d0-a0d0768cbec5/volumes/kubernetes.io~secret/serving-cert major:0 minor:757 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b73ae08-0ad7-4f99-8002-6df0d984cd2c/volumes/kubernetes.io~projected/kube-api-access-mb46b:{mountpoint:/var/lib/kubelet/pods/8b73ae08-0ad7-4f99-8002-6df0d984cd2c/volumes/kubernetes.io~projected/kube-api-access-mb46b major:0 minor:258 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8b73ae08-0ad7-4f99-8002-6df0d984cd2c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8b73ae08-0ad7-4f99-8002-6df0d984cd2c/volumes/kubernetes.io~secret/serving-cert major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~projected/kube-api-access-z67rw:{mountpoint:/var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~projected/kube-api-access-z67rw major:0 minor:536 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~secret/federate-client-tls:{mountpoint:/var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~secret/federate-client-tls major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~secret/secret-telemeter-client:{mountpoint:/var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~secret/secret-telemeter-client major:0 minor:524 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config major:0 minor:529 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~secret/telemeter-client-tls:{mountpoint:/var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~secret/telemeter-client-tls major:0 minor:510 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/900e244c-67aa-402f-b5f0-d37c5c1cedf7/volumes/kubernetes.io~projected/kube-api-access-n85mh:{mountpoint:/var/lib/kubelet/pods/900e244c-67aa-402f-b5f0-d37c5c1cedf7/volumes/kubernetes.io~projected/kube-api-access-n85mh major:0 minor:257 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/92008ac4-8deb-4fb9-9116-14d2d005bd36/volumes/kubernetes.io~projected/kube-api-access-n4dn4:{mountpoint:/var/lib/kubelet/pods/92008ac4-8deb-4fb9-9116-14d2d005bd36/volumes/kubernetes.io~projected/kube-api-access-n4dn4 major:0 minor:1060 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/929dffba-46da-4d81-a437-bc6a9fe79811/volumes/kubernetes.io~projected/kube-api-access-9mpr8:{mountpoint:/var/lib/kubelet/pods/929dffba-46da-4d81-a437-bc6a9fe79811/volumes/kubernetes.io~projected/kube-api-access-9mpr8 major:0 minor:309 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/93786626-fac4-48f0-bf72-992bc39f4a82/volumes/kubernetes.io~projected/kube-api-access-fm2jn:{mountpoint:/var/lib/kubelet/pods/93786626-fac4-48f0-bf72-992bc39f4a82/volumes/kubernetes.io~projected/kube-api-access-fm2jn major:0 minor:907 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/989af121-da08-4f40-b08c-dd2aa67bc60c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/989af121-da08-4f40-b08c-dd2aa67bc60c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/989af121-da08-4f40-b08c-dd2aa67bc60c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/989af121-da08-4f40-b08c-dd2aa67bc60c/volumes/kubernetes.io~secret/serving-cert major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/996d4949-f92c-42ac-9bda-8c6ec0295e92/volumes/kubernetes.io~projected/kube-api-access-4kfqn:{mountpoint:/var/lib/kubelet/pods/996d4949-f92c-42ac-9bda-8c6ec0295e92/volumes/kubernetes.io~projected/kube-api-access-4kfqn major:0 minor:1090 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/996d4949-f92c-42ac-9bda-8c6ec0295e92/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/996d4949-f92c-42ac-9bda-8c6ec0295e92/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:1083 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/99fe3b99-0b40-4887-bcc8-59caa515b99f/volumes/kubernetes.io~projected/kube-api-access-dkc7z:{mountpoint:/var/lib/kubelet/pods/99fe3b99-0b40-4887-bcc8-59caa515b99f/volumes/kubernetes.io~projected/kube-api-access-dkc7z major:0 minor:1158 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99fe3b99-0b40-4887-bcc8-59caa515b99f/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/99fe3b99-0b40-4887-bcc8-59caa515b99f/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1154 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99fe3b99-0b40-4887-bcc8-59caa515b99f/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/99fe3b99-0b40-4887-bcc8-59caa515b99f/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1171 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9fd9f419-2cdc-4991-8fb9-87d76ac58976/volumes/kubernetes.io~projected/kube-api-access-svlzf:{mountpoint:/var/lib/kubelet/pods/9fd9f419-2cdc-4991-8fb9-87d76ac58976/volumes/kubernetes.io~projected/kube-api-access-svlzf major:0 minor:111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9fd9f419-2cdc-4991-8fb9-87d76ac58976/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/9fd9f419-2cdc-4991-8fb9-87d76ac58976/volumes/kubernetes.io~secret/metrics-tls major:0 minor:77 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1af84e0-776b-4285-906a-6880dbc82a7b/volumes/kubernetes.io~projected/kube-api-access-6lp29:{mountpoint:/var/lib/kubelet/pods/a1af84e0-776b-4285-906a-6880dbc82a7b/volumes/kubernetes.io~projected/kube-api-access-6lp29 major:0 minor:375 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a39c5481-961c-4ac2-8c5b-a2c0165f4188/volumes/kubernetes.io~projected/kube-api-access-tl7tw:{mountpoint:/var/lib/kubelet/pods/a39c5481-961c-4ac2-8c5b-a2c0165f4188/volumes/kubernetes.io~projected/kube-api-access-tl7tw major:0 minor:1164 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a39c5481-961c-4ac2-8c5b-a2c0165f4188/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/a39c5481-961c-4ac2-8c5b-a2c0165f4188/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1161 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a39c5481-961c-4ac2-8c5b-a2c0165f4188/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/a39c5481-961c-4ac2-8c5b-a2c0165f4188/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1162 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a4339bd5-b8d1-467e-8158-4464ea901148/volumes/kubernetes.io~projected/kube-api-access-jvthk:{mountpoint:/var/lib/kubelet/pods/a4339bd5-b8d1-467e-8158-4464ea901148/volumes/kubernetes.io~projected/kube-api-access-jvthk major:0 minor:789 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a4339bd5-b8d1-467e-8158-4464ea901148/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a4339bd5-b8d1-467e-8158-4464ea901148/volumes/kubernetes.io~secret/serving-cert major:0 minor:781 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8c0a6d2-f1f9-49e3-9475-4983b50667bf/volumes/kubernetes.io~projected/kube-api-access-mchbh:{mountpoint:/var/lib/kubelet/pods/a8c0a6d2-f1f9-49e3-9475-4983b50667bf/volumes/kubernetes.io~projected/kube-api-access-mchbh major:0 minor:660 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8c0a6d2-f1f9-49e3-9475-4983b50667bf/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/a8c0a6d2-f1f9-49e3-9475-4983b50667bf/volumes/kubernetes.io~secret/encryption-config major:0 minor:659 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8c0a6d2-f1f9-49e3-9475-4983b50667bf/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/a8c0a6d2-f1f9-49e3-9475-4983b50667bf/volumes/kubernetes.io~secret/etcd-client major:0 minor:662 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a8c0a6d2-f1f9-49e3-9475-4983b50667bf/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a8c0a6d2-f1f9-49e3-9475-4983b50667bf/volumes/kubernetes.io~secret/serving-cert major:0 minor:661 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ac3680de-aabf-414b-a340-5e5e6aea4822/volumes/kubernetes.io~projected/kube-api-access-rln42:{mountpoint:/var/lib/kubelet/pods/ac3680de-aabf-414b-a340-5e5e6aea4822/volumes/kubernetes.io~projected/kube-api-access-rln42 major:0 minor:904 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae43311e-14ba-40a1-bdbf-f02d68031757/volumes/kubernetes.io~projected/kube-api-access-mf5p9:{mountpoint:/var/lib/kubelet/pods/ae43311e-14ba-40a1-bdbf-f02d68031757/volumes/kubernetes.io~projected/kube-api-access-mf5p9 major:0 minor:1061 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae43311e-14ba-40a1-bdbf-f02d68031757/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/ae43311e-14ba-40a1-bdbf-f02d68031757/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1062 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae43311e-14ba-40a1-bdbf-f02d68031757/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/ae43311e-14ba-40a1-bdbf-f02d68031757/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1059 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af7b6f34-adca-4bdb-9e41-e2995a1d67a8/volumes/kubernetes.io~projected/kube-api-access-nrrq4:{mountpoint:/var/lib/kubelet/pods/af7b6f34-adca-4bdb-9e41-e2995a1d67a8/volumes/kubernetes.io~projected/kube-api-access-nrrq4 major:0 minor:424 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d/volumes/kubernetes.io~projected/kube-api-access-xtgrt:{mountpoint:/var/lib/kubelet/pods/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d/volumes/kubernetes.io~projected/kube-api-access-xtgrt major:0 minor:880 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b385880b-a26b-4353-8f6f-b7f926bcc67c/volumes/kubernetes.io~projected/kube-api-access-fwclx:{mountpoint:/var/lib/kubelet/pods/b385880b-a26b-4353-8f6f-b7f926bcc67c/volumes/kubernetes.io~projected/kube-api-access-fwclx major:0 minor:788 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b385880b-a26b-4353-8f6f-b7f926bcc67c/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/b385880b-a26b-4353-8f6f-b7f926bcc67c/volumes/kubernetes.io~secret/cert major:0 minor:780 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6e6d218-d969-40b5-a32b-9b2093089dbf/volumes/kubernetes.io~projected/kube-api-access-psd59:{mountpoint:/var/lib/kubelet/pods/b6e6d218-d969-40b5-a32b-9b2093089dbf/volumes/kubernetes.io~projected/kube-api-access-psd59 major:0 minor:130 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~projected/kube-api-access-pzmqr:{mountpoint:/var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~projected/kube-api-access-pzmqr major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:504 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes/kubernetes.io~projected/kube-api-access-9zppr:{mountpoint:/var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes/kubernetes.io~projected/kube-api-access-9zppr major:0 minor:1225 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdf18981-b755-4b11-8793-38bc5e2e755b/volumes/kubernetes.io~projected/kube-api-access-wr5wk:{mountpoint:/var/lib/kubelet/pods/bdf18981-b755-4b11-8793-38bc5e2e755b/volumes/kubernetes.io~projected/kube-api-access-wr5wk major:0 minor:689 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdf18981-b755-4b11-8793-38bc5e2e755b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/bdf18981-b755-4b11-8793-38bc5e2e755b/volumes/kubernetes.io~secret/serving-cert major:0 minor:685 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0/volumes/kubernetes.io~projected/kube-api-access-tthkk:{mountpoint:/var/lib/kubelet/pods/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0/volumes/kubernetes.io~projected/kube-api-access-tthkk major:0 minor:640 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0/volumes/kubernetes.io~secret/metrics-tls major:0 minor:645 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c0a3548f-299c-4234-9bf1-c93efcb9740b/volumes/kubernetes.io~projected/kube-api-access-7d5fq:{mountpoint:/var/lib/kubelet/pods/c0a3548f-299c-4234-9bf1-c93efcb9740b/volumes/kubernetes.io~projected/kube-api-access-7d5fq major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0a3548f-299c-4234-9bf1-c93efcb9740b/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/c0a3548f-299c-4234-9bf1-c93efcb9740b/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:507 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0b78aa6-7bc8-4221-81f5-bf62a7110380/volumes/kubernetes.io~projected/kube-api-access-lhzk6:{mountpoint:/var/lib/kubelet/pods/c0b78aa6-7bc8-4221-81f5-bf62a7110380/volumes/kubernetes.io~projected/kube-api-access-lhzk6 major:0 minor:1163 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0b78aa6-7bc8-4221-81f5-bf62a7110380/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/c0b78aa6-7bc8-4221-81f5-bf62a7110380/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1159 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c0b78aa6-7bc8-4221-81f5-bf62a7110380/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/c0b78aa6-7bc8-4221-81f5-bf62a7110380/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1160 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/volumes/kubernetes.io~projected/kube-api-access-b54xg:{mountpoint:/var/lib/kubelet/pods/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/volumes/kubernetes.io~projected/kube-api-access-b54xg major:0 minor:547 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/volumes/kubernetes.io~secret/encryption-config major:0 minor:546 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/volumes/kubernetes.io~secret/etcd-client major:0 minor:495 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/volumes/kubernetes.io~secret/serving-cert major:0 minor:545 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c81ad608-a8ad-4289-a8d2-d48acb9b540c/volumes/kubernetes.io~projected/kube-api-access-wj4dx:{mountpoint:/var/lib/kubelet/pods/c81ad608-a8ad-4289-a8d2-d48acb9b540c/volumes/kubernetes.io~projected/kube-api-access-wj4dx major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c81ad608-a8ad-4289-a8d2-d48acb9b540c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c81ad608-a8ad-4289-a8d2-d48acb9b540c/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/caef1c17-56b0-479c-b000-caaac3c2b249/volumes/kubernetes.io~projected/kube-api-access-8kgzf:{mountpoint:/var/lib/kubelet/pods/caef1c17-56b0-479c-b000-caaac3c2b249/volumes/kubernetes.io~projected/kube-api-access-8kgzf major:0 minor:764 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/caef1c17-56b0-479c-b000-caaac3c2b249/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/caef1c17-56b0-479c-b000-caaac3c2b249/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:745 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d28490b0-96ca-4fe0-8fae-e6f8390f933b/volumes/kubernetes.io~projected/kube-api-access-qm5p2:{mountpoint:/var/lib/kubelet/pods/d28490b0-96ca-4fe0-8fae-e6f8390f933b/volumes/kubernetes.io~projected/kube-api-access-qm5p2 major:0 minor:260 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d28490b0-96ca-4fe0-8fae-e6f8390f933b/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/d28490b0-96ca-4fe0-8fae-e6f8390f933b/volumes/kubernetes.io~secret/metrics-tls major:0 minor:506 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3ca2d2f-9f31-4524-a28f-cf16b02dd711/volumes/kubernetes.io~projected/kube-api-access-4jn8g:{mountpoint:/var/lib/kubelet/pods/d3ca2d2f-9f31-4524-a28f-cf16b02dd711/volumes/kubernetes.io~projected/kube-api-access-4jn8g major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3ca2d2f-9f31-4524-a28f-cf16b02dd711/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/d3ca2d2f-9f31-4524-a28f-cf16b02dd711/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de/volumes/kubernetes.io~projected/kube-api-access-wcfnf:{mountpoint:/var/lib/kubelet/pods/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de/volumes/kubernetes.io~projected/kube-api-access-wcfnf major:0 minor:906 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de/volumes/kubernetes.io~secret/proxy-tls major:0 minor:905 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db9dc349-5216-43ff-8c17-3a9384a010ea/volumes/kubernetes.io~projected/kube-api-access-smglm:{mountpoint:/var/lib/kubelet/pods/db9dc349-5216-43ff-8c17-3a9384a010ea/volumes/kubernetes.io~projected/kube-api-access-smglm major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/db9dc349-5216-43ff-8c17-3a9384a010ea/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/db9dc349-5216-43ff-8c17-3a9384a010ea/volumes/kubernetes.io~secret/serving-cert major:0 minor:252 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e3cc4073-a926-4aba-81e6-c616c2bb2987/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/e3cc4073-a926-4aba-81e6-c616c2bb2987/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1055 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee3a6748-0bbc-41bf-8726-a8db18faf03b/volumes/kubernetes.io~projected/kube-api-access-mk2pl:{mountpoint:/var/lib/kubelet/pods/ee3a6748-0bbc-41bf-8726-a8db18faf03b/volumes/kubernetes.io~projected/kube-api-access-mk2pl major:0 minor:714 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee3a6748-0bbc-41bf-8726-a8db18faf03b/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/ee3a6748-0bbc-41bf-8726-a8db18faf03b/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:582 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef3a09a5-b019-48a3-97f8-7ddadb37394e/volumes/kubernetes.io~projected/kube-api-access-pcqd4:{mountpoint:/var/lib/kubelet/pods/ef3a09a5-b019-48a3-97f8-7ddadb37394e/volumes/kubernetes.io~projected/kube-api-access-pcqd4 major:0 minor:1072 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef3a09a5-b019-48a3-97f8-7ddadb37394e/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/ef3a09a5-b019-48a3-97f8-7ddadb37394e/volumes/kubernetes.io~secret/certs major:0 minor:1070 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef3a09a5-b019-48a3-97f8-7ddadb37394e/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/ef3a09a5-b019-48a3-97f8-7ddadb37394e/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1071 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fc334fff-c0bf-4905-bcdb-b0d2a35b0590/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/fc334fff-c0bf-4905-bcdb-b0d2a35b0590/volumes/kubernetes.io~projected/ca-certs major:0 minor:434 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fc334fff-c0bf-4905-bcdb-b0d2a35b0590/volumes/kubernetes.io~projected/kube-api-access-9lcqg:{mountpoint:/var/lib/kubelet/pods/fc334fff-c0bf-4905-bcdb-b0d2a35b0590/volumes/kubernetes.io~projected/kube-api-access-9lcqg major:0 minor:435 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fc334fff-c0bf-4905-bcdb-b0d2a35b0590/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/fc334fff-c0bf-4905-bcdb-b0d2a35b0590/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:433 fsType:tmpfs blockSize:0} overlay_0-1001:{mountpoint:/var/lib/containers/storage/overlay/9475b44471ad30e707d649ed1daa32cab16c4f1bb485202ab3fc1815eaf48cd3/merged major:0 minor:1001 fsType:overlay blockSize:0} overlay_0-1002:{mountpoint:/var/lib/containers/storage/overlay/757ee43118ca0d03099687121471c9a0c2928b06db4b436e8e7baab22ca1b7e4/merged major:0 minor:1002 fsType:overlay blockSize:0} overlay_0-1004:{mountpoint:/var/lib/containers/storage/overlay/67a8d5dffd3a147ce115bb3ee5f34a473d5b0ea5021e2b75d66b570c441098cf/merged major:0 minor:1004 fsType:overlay blockSize:0} overlay_0-1017:{mountpoint:/var/lib/containers/storage/overlay/5fd312d40582fa4a8270526bb33da06a2ea59ccb60b6882cc84f2809ece51a32/merged major:0 minor:1017 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/41833c2ec1b3af4047582f7cede91a689f9c9c56a7da92514b19696f9de5c8fc/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1024:{mountpoint:/var/lib/containers/storage/overlay/4ed394411573231053dc2171ac8dbc117ca8933b8ef273039b0397077120181d/merged major:0 minor:1024 fsType:overlay blockSize:0} overlay_0-1026:{mountpoint:/var/lib/containers/storage/overlay/96c8b07ec81d1877475262f25eb9fdd515b9374e6b1af9245a72b61e9e4f2130/merged major:0 minor:1026 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/0a7541542e37ab85a431b559a6cf48b1065fb34a1180cfca4e2cf64da126916b/merged major:0 minor:103 fsType:overlay 
blockSize:0} overlay_0-1043:{mountpoint:/var/lib/containers/storage/overlay/1111dfc9e14ee206ac44820e566f453c7cfaa6122c00a544cca4843b3b47ca7c/merged major:0 minor:1043 fsType:overlay blockSize:0} overlay_0-1045:{mountpoint:/var/lib/containers/storage/overlay/2fd022badecca6c4fd05e2b869cf5c2644c9bb0aff46277f0d436a66dbeb2818/merged major:0 minor:1045 fsType:overlay blockSize:0} overlay_0-1047:{mountpoint:/var/lib/containers/storage/overlay/0891636f9bef8681c21fdfdfffbcc0af3a44754635ea8779ed6895a3d95fb0fc/merged major:0 minor:1047 fsType:overlay blockSize:0} overlay_0-1049:{mountpoint:/var/lib/containers/storage/overlay/8562c902d8613abe464881997c27ddb695f77625d76527c9ccba2a504befc2cc/merged major:0 minor:1049 fsType:overlay blockSize:0} overlay_0-105:{mountpoint:/var/lib/containers/storage/overlay/a56161b9a72e4ff361eec7dda4b6c126f9b11f7c96b21a3159484e8c87dff239/merged major:0 minor:105 fsType:overlay blockSize:0} overlay_0-1074:{mountpoint:/var/lib/containers/storage/overlay/fc13be7077334e62127242b4d5b6748f816d23a019744913c9c35df5639c2867/merged major:0 minor:1074 fsType:overlay blockSize:0} overlay_0-1078:{mountpoint:/var/lib/containers/storage/overlay/fb46db10fe34d24c6a2901321be36596fbf83725bef7a5898808616826d4e8e1/merged major:0 minor:1078 fsType:overlay blockSize:0} overlay_0-1079:{mountpoint:/var/lib/containers/storage/overlay/4d7914e066cf3e6cc32512019caa9892684f25e516b458e4a2232c1abe91c4d2/merged major:0 minor:1079 fsType:overlay blockSize:0} overlay_0-1081:{mountpoint:/var/lib/containers/storage/overlay/498ff689cf7853a5773631d15a8efe9199981a06fae931d8f03078fcc6af54c0/merged major:0 minor:1081 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/9c51f8cea49d48b23d058004f503580388256aa76a05fb3daa3cef8eb4425571/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-1086:{mountpoint:/var/lib/containers/storage/overlay/71b0d6f73a26fff331b65a50d1be5d1d027dccae7f29a4076a1837c47118c117/merged major:0 minor:1086 fsType:overlay 
blockSize:0} overlay_0-1088:{mountpoint:/var/lib/containers/storage/overlay/befc7b150599ed9dd1fcf56ee9e83dd14b895d45f316f2c1381cfd630159b00d/merged major:0 minor:1088 fsType:overlay blockSize:0} overlay_0-1097:{mountpoint:/var/lib/containers/storage/overlay/29915366a4c7d158402073c0e07236c5c55561c488e64c0de92e34c0e82da59b/merged major:0 minor:1097 fsType:overlay blockSize:0} overlay_0-1099:{mountpoint:/var/lib/containers/storage/overlay/4f45ee4a5e26cca4f1b1d16caaaf0e5e8347cd58161b4dd4a9d2edbe2aefd0c5/merged major:0 minor:1099 fsType:overlay blockSize:0} overlay_0-1104:{mountpoint:/var/lib/containers/storage/overlay/623b0630d57d15c8716c3c835de3d939e048eeaecb58ef3f67f3f151c925096d/merged major:0 minor:1104 fsType:overlay blockSize:0} overlay_0-1106:{mountpoint:/var/lib/containers/storage/overlay/8b751868bf8c6dc174a0f88cd309348c8b7499dde72fb356453f42988481bfa1/merged major:0 minor:1106 fsType:overlay blockSize:0} overlay_0-1108:{mountpoint:/var/lib/containers/storage/overlay/03aac2ac30981843d8bb6c50c491cf118bce4f5a43c1b31956b1cb467dfc6816/merged major:0 minor:1108 fsType:overlay blockSize:0} overlay_0-1113:{mountpoint:/var/lib/containers/storage/overlay/49b746cde649a37d88b06802d93222c2fe11e306b9b6420e7d82d66d443e434a/merged major:0 minor:1113 fsType:overlay blockSize:0} overlay_0-1115:{mountpoint:/var/lib/containers/storage/overlay/8a6793655bbbb0c1c3f9adeeb6098cb7f976e80c984796b5844201f3f1ab52d7/merged major:0 minor:1115 fsType:overlay blockSize:0} overlay_0-1122:{mountpoint:/var/lib/containers/storage/overlay/aec90d0736654f4aad117c2a3db38d60a953b88e741f24729a9a6be90eb8e089/merged major:0 minor:1122 fsType:overlay blockSize:0} overlay_0-1149:{mountpoint:/var/lib/containers/storage/overlay/4c7f2ecf6ac07298526e7696d731e948c98e06752f943a5319671aa0a5ada193/merged major:0 minor:1149 fsType:overlay blockSize:0} overlay_0-115:{mountpoint:/var/lib/containers/storage/overlay/9b4bb7bf4d5cc47259fef5a25c6f92b4bac2fba022ebe82b81d64bdd4d76a2e1/merged major:0 minor:115 fsType:overlay 
blockSize:0} overlay_0-1152:{mountpoint:/var/lib/containers/storage/overlay/9fc21ad6d423a691ae7967df7b5e574c8fdced266ea3aa78f9454e02976ec0aa/merged major:0 minor:1152 fsType:overlay blockSize:0} overlay_0-1169:{mountpoint:/var/lib/containers/storage/overlay/034af79d94ff0eb3005b6be325f9347cf60197de34b73d3bc488429d19f382da/merged major:0 minor:1169 fsType:overlay blockSize:0} overlay_0-1172:{mountpoint:/var/lib/containers/storage/overlay/20b972b19cf9c13fee3d9221a64342e952989ea5de3e5d371b441850d376762d/merged major:0 minor:1172 fsType:overlay blockSize:0} overlay_0-1173:{mountpoint:/var/lib/containers/storage/overlay/ded5b0397bd7ce93bdb8a37c8b10b8b04035efec49962e184411d40d862cd8a4/merged major:0 minor:1173 fsType:overlay blockSize:0} overlay_0-1176:{mountpoint:/var/lib/containers/storage/overlay/4b523eacd10cefc144a1e3c2179c18cab2cf2e7389d55b5f1d52452279f65b9e/merged major:0 minor:1176 fsType:overlay blockSize:0} overlay_0-1184:{mountpoint:/var/lib/containers/storage/overlay/71d75341a226b73c014bf4bafe4a496dfeafd67a50f3236140a48dc9bfdec7f0/merged major:0 minor:1184 fsType:overlay blockSize:0} overlay_0-1186:{mountpoint:/var/lib/containers/storage/overlay/ba990fc0cac7b2693217e8b7026c8dfca6e75d5cdf29d07748a88060230095ee/merged major:0 minor:1186 fsType:overlay blockSize:0} overlay_0-119:{mountpoint:/var/lib/containers/storage/overlay/a213ddefcc2eac415c59ebb84984762aa19415b47e14e15d3a32ecbb8c4e8db4/merged major:0 minor:119 fsType:overlay blockSize:0} overlay_0-1191:{mountpoint:/var/lib/containers/storage/overlay/f28b3825ebe5061e6db696b7f3b0ebcd00c356042cf78e6d95846c9824318b7b/merged major:0 minor:1191 fsType:overlay blockSize:0} overlay_0-1193:{mountpoint:/var/lib/containers/storage/overlay/0cdb823facdbeb923b3690591077f6781316710780f21b3e0d5122369ef79eec/merged major:0 minor:1193 fsType:overlay blockSize:0} overlay_0-1195:{mountpoint:/var/lib/containers/storage/overlay/7fb68f5be68a6c041d282b73f71d53bd2ba454ceaeafc05807b9125a99addd53/merged major:0 minor:1195 fsType:overlay 
blockSize:0} overlay_0-1197:{mountpoint:/var/lib/containers/storage/overlay/052e6156b1401352f8c27cc91678021e1e4508279bb5442c3d696898d5af6147/merged major:0 minor:1197 fsType:overlay blockSize:0} overlay_0-1199:{mountpoint:/var/lib/containers/storage/overlay/56d5b5b7338dba2a3b033848547c9fb4dbc09697151127cc6f8dd2dc2f42f111/merged major:0 minor:1199 fsType:overlay blockSize:0} overlay_0-1214:{mountpoint:/var/lib/containers/storage/overlay/1b179541ae38aa68d1f45941f0296931dbed4e073fa7c5f3f6264a3ccbb0c7bd/merged major:0 minor:1214 fsType:overlay blockSize:0} overlay_0-1228:{mountpoint:/var/lib/containers/storage/overlay/101efc1aa3aa222ca726fa4adeaa9ee3aba634bfc8c89f1af956e3cac34a2cf7/merged major:0 minor:1228 fsType:overlay blockSize:0} overlay_0-123:{mountpoint:/var/lib/containers/storage/overlay/5d36f7a42ddc9f1289f2bfd6dcd693d2f1411b0916aa0c91a9a47541dbadd79e/merged major:0 minor:123 fsType:overlay blockSize:0} overlay_0-1230:{mountpoint:/var/lib/containers/storage/overlay/b731e4127438cc773161118a2a4f96852ad6192d0043dcc33f7122796c8a0447/merged major:0 minor:1230 fsType:overlay blockSize:0} overlay_0-1237:{mountpoint:/var/lib/containers/storage/overlay/8679c2ac75316f2f0762dff16ba11dc62bbcfbd661a79d5a23fe8125d53800b9/merged major:0 minor:1237 fsType:overlay blockSize:0} overlay_0-1239:{mountpoint:/var/lib/containers/storage/overlay/360a92eaa87be91d50efa344339372f39cb84459b9327bccb2c61ebd2c10351e/merged major:0 minor:1239 fsType:overlay blockSize:0} overlay_0-1241:{mountpoint:/var/lib/containers/storage/overlay/929315acb8ba59f61808fe67ffe603499430db8f609feec5500fd0e6389b8bc2/merged major:0 minor:1241 fsType:overlay blockSize:0} overlay_0-125:{mountpoint:/var/lib/containers/storage/overlay/fe9e39115af35746179420e02ea1c68218d84288c1ad2f2f9829c6ff16bda3b1/merged major:0 minor:125 fsType:overlay blockSize:0} overlay_0-1253:{mountpoint:/var/lib/containers/storage/overlay/f43941cda2b0850862abf0abe5283d5c330f7b10742468f1d12ff719ef13bcb1/merged major:0 minor:1253 fsType:overlay 
blockSize:0} overlay_0-1255:{mountpoint:/var/lib/containers/storage/overlay/a3188a28cd039ccea381924b0cae396e0645acea589c259087f8af6502d5f3a5/merged major:0 minor:1255 fsType:overlay blockSize:0} overlay_0-1260:{mountpoint:/var/lib/containers/storage/overlay/1338c93c6495d5b9cbeccab3d551d50e176394da328675681d962f4a6da70205/merged major:0 minor:1260 fsType:overlay blockSize:0} overlay_0-1263:{mountpoint:/var/lib/containers/storage/overlay/739b2c2339eb090c0baa6b78dd980d04c3f7a3330125ccd0b0c65c1b83b30a8d/merged major:0 minor:1263 fsType:overlay blockSize:0} overlay_0-1274:{mountpoint:/var/lib/containers/storage/overlay/bf2d593184717a0819423224398de8aed995e4fc8178212d9a6328431a1d650f/merged major:0 minor:1274 fsType:overlay blockSize:0} overlay_0-1279:{mountpoint:/var/lib/containers/storage/overlay/e95f1b75a164f67e26070e7ec6db3f8c5ec5187eb92fde5a0f2118228e2f839e/merged major:0 minor:1279 fsType:overlay blockSize:0} overlay_0-128:{mountpoint:/var/lib/containers/storage/overlay/ee773974150abafa3da020aea0e1639771fc0ea7a7c9d08bc4728a379c022239/merged major:0 minor:128 fsType:overlay blockSize:0} overlay_0-1285:{mountpoint:/var/lib/containers/storage/overlay/895d5e76db6405abb3c746f03f910608b72533caefedc93f886b5ab4166dfea1/merged major:0 minor:1285 fsType:overlay blockSize:0} overlay_0-1295:{mountpoint:/var/lib/containers/storage/overlay/ee35f3c87490846c1312934f690a436b02e19a0390372aa262664d4ab3577297/merged major:0 minor:1295 fsType:overlay blockSize:0} overlay_0-1297:{mountpoint:/var/lib/containers/storage/overlay/66df787cee6d830da52ddbfa356301967f3dddd5c8f9a0aabf61725509d1ba08/merged major:0 minor:1297 fsType:overlay blockSize:0} overlay_0-1299:{mountpoint:/var/lib/containers/storage/overlay/398739d476999b43f05b558766c13030f1944f967540df39e4c3b119de686672/merged major:0 minor:1299 fsType:overlay blockSize:0} overlay_0-1305:{mountpoint:/var/lib/containers/storage/overlay/6e6f23a70a8f3baa6c150b5a28e076904f54706c070042c110b33aa6ae3ffff8/merged major:0 minor:1305 fsType:overlay 
blockSize:0} overlay_0-1311:{mountpoint:/var/lib/containers/storage/overlay/04a9754f05066fa8595c1ed622a2a5a070f44af34063422081fda8fa8bc62a31/merged major:0 minor:1311 fsType:overlay blockSize:0} overlay_0-1319:{mountpoint:/var/lib/containers/storage/overlay/54884e946c8dd6e6ca9b0c630fc1031c74cd4245a5a9143cf0c6bedf78eb9cb4/merged major:0 minor:1319 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/42fea0280817d534a2baf61ef052d3c5fef40dc4470baf5ff1c6372fa88cdd14/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-1332:{mountpoint:/var/lib/containers/storage/overlay/71477da022e42d7ceef62bf5ac623e14948b6e661e0228a579fc1a9b102b650f/merged major:0 minor:1332 fsType:overlay blockSize:0} overlay_0-1336:{mountpoint:/var/lib/containers/storage/overlay/708fdb24d96f6522ced5f7754218ccd4ef7e90e2b1876b023a91c778e35e87c5/merged major:0 minor:1336 fsType:overlay blockSize:0} overlay_0-1339:{mountpoint:/var/lib/containers/storage/overlay/fbe3180bff149d5bc5372ef12eadaae565a9cc55f69da28872a519694323b198/merged major:0 minor:1339 fsType:overlay blockSize:0} overlay_0-1347:{mountpoint:/var/lib/containers/storage/overlay/3a13f3640d70da7429fb0e6a130eea0d3d30aeaf0aa55b00690522d9be1d66de/merged major:0 minor:1347 fsType:overlay blockSize:0} overlay_0-1351:{mountpoint:/var/lib/containers/storage/overlay/3426c8068b453fc7f4e43b3d77c8b4f0d3df481b61106fdaf60633285679b899/merged major:0 minor:1351 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/371c312551cf30b9bd5610ccc6f83ad3a2420bb932045b3bf69f44a34c8fd209/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-1364:{mountpoint:/var/lib/containers/storage/overlay/a76445e9eb28a89e683ef1d492dc1d281266df7b8363ee4a5184fbd40aba2858/merged major:0 minor:1364 fsType:overlay blockSize:0} overlay_0-1366:{mountpoint:/var/lib/containers/storage/overlay/8a12116c65dff6f6097b3de2c0dfd84b2fad7f0462c3f6d5608561f739898c1a/merged major:0 minor:1366 fsType:overlay 
blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/88afa83af9754029705c132540fee1717e3f30bda62375056689c08e3fd9c186/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/9cdc891a1b2041c1f0fbd12af9293cc3eba45b2310911c564caaf97784d61b35/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/095b07d1fdec20a84a4b6b8faf0e952e43e5714129579d1b7f0b3385967cc719/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/91b598f62b9d266fdf4e03ff8f12dabff58d2d97e4b1e3aaec93e2f4cfbaf7f6/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/04b49d66ea78cded76006545f5f0991bdf5168cd60c57f78b7e188bc427a24a4/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/ce916713e86b6573843078429d32e9e3c301db603bfaf66103172ba8dd211ec9/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/9660bbdd6383fc73dd97f0f279768413e10d264ac38e590e70cfefe8f0187ce1/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/d1e21a6b71cd6adad9ff667b49d3a0ea53813e7526f0eae3ae00e0876f217363/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/b02be86bbff28746f6aafe50e5c48ca560b1ed8c57bda8e5fcff55adea9d35e9/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/06607e04a8667b0ad462a765b6138c230302bc723f96b3d1e36ecb804f94fcc7/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/884dddb3ccb62f88a585a5ea28de915faa79eddb5fb65af77030322802394dbf/merged major:0 minor:176 fsType:overlay blockSize:0} 
overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/33a2defa9b85296d1a0b80315602adb8a66752f61db42a95e863938dc8d11282/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-180:{mountpoint:/var/lib/containers/storage/overlay/f00867832b255e75707a9fabc6305f45d3d2876d766fe496b97e14423b6b5b60/merged major:0 minor:180 fsType:overlay blockSize:0} overlay_0-182:{mountpoint:/var/lib/containers/storage/overlay/f6dae9f7fa172b5cf053523244188662150c9ccf1cc16344dd9ad9fcd780b074/merged major:0 minor:182 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/e494711d932198e113069979d11742defc146f0c350d81d4f13bac816c1fcad0/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-185:{mountpoint:/var/lib/containers/storage/overlay/f998428b0584768efdbc858a1a882e5cd2f98771499299479d585d0048b88938/merged major:0 minor:185 fsType:overlay blockSize:0} overlay_0-187:{mountpoint:/var/lib/containers/storage/overlay/0d32d5800af2e7aa13d372e9ae2e08b99a09aaef00f66abc791dcdbfcfa86c2d/merged major:0 minor:187 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/63c3e916b033be6d0858bfd7ffc02b6682d3131b17372c4ffe0223760d31d475/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/dd846a6181aff52c28d4ca11e844736e67c8135f32c5d6956d6f7490c1eee840/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-202:{mountpoint:/var/lib/containers/storage/overlay/e1e1ab6f1a3a16c6097fc1bc10107c8bdf011b4cf8a44aaae301f1502e2d1be3/merged major:0 minor:202 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/1209a0651f9fdd29c776b6fd6e9aebabc3684884ab3b77aa961f078a888e6426/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/5abe1a1dfcedfe8aea872180aea9d9066d024b2579673806ae04f105fe9d79f3/merged major:0 minor:215 fsType:overlay blockSize:0} 
overlay_0-217:{mountpoint:/var/lib/containers/storage/overlay/159e8b8325ea5f0bff51e0166c11e5e19c9d3891873c042f1a1541cc136a56c1/merged major:0 minor:217 fsType:overlay blockSize:0} overlay_0-222:{mountpoint:/var/lib/containers/storage/overlay/19ec1cfddd6029204bd1e471a20b4eff22fd6a6182bd701cc30a9b8a73581ebd/merged major:0 minor:222 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/61ee62664ae85ddf0c90683532d8e9705aa26361a25aa487c11f62ce2327af65/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-270:{mountpoint:/var/lib/containers/storage/overlay/ef949e9f88456f7303370d9cc1840f562391df74e20098f0528d76daaa9d42ea/merged major:0 minor:270 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/5ff857cc9fffaaebde426bf5f86e18c24e6e0e1a2430185a110f213dd393c9a4/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/8ab419c96e042d72e23d71d2b690d3a372d40de7726d974249ce53c590c85b33/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/c9ed36b06f64c2a9d4c2857406c11bb5f39771959fe552fe1bfaee30a0cbf0cd/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/38e3f396ff7a0a130ec51bbc340c958e0cd8a6cd694d91f4adb8e1241a51b37d/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/d9f3d597e0230afc1df35523f75b7f9ce9712bcff99fb0f8b329911d2c53cc06/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/02d6cf3a35343bf224af25b1ed6f931d862a0a43d4ae4e3bd43d36937a49a4ff/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/76470ecdbf0d9605fbd5dbaecdf5ef327ad18aedcbe5b0bcb3d7af83441a0979/merged major:0 minor:307 fsType:overlay blockSize:0} 
overlay_0-310:{mountpoint:/var/lib/containers/storage/overlay/cd4193a4b62518ccb300694dfe270ed56295e20833819cd1e84944ae89e1c711/merged major:0 minor:310 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/0430268f4dfe5e77f0af5f198016649a961167f7c17bab2ba16282fc4fb7d1d2/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/36a06de1514a13d3ec4751fe34163c7df6d5a2995aa7fac634c13b0510acc47e/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/2ff306a0b075c1d2b9642532bab2d974d516d2a6e576cee1b20a1b0a791d3d2e/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/b2c484d3afb38ece6cf1d3adc2ced31eef7c52e33c18c656481ca73d0bfb29c2/merged major:0 minor:320 fsType:overlay blockSize:0} overlay_0-322:{mountpoint:/var/lib/containers/storage/overlay/9c3995c7f1537015ff5ddfd64f26852fd3465bb9be274ae4c7a4f3820db6afd6/merged major:0 minor:322 fsType:overlay blockSize:0} overlay_0-326:{mountpoint:/var/lib/containers/storage/overlay/d598fbe7a822fd7ec584c145fd1523f6c5f2a333e99b0a5af93f18d6cf58902a/merged major:0 minor:326 fsType:overlay blockSize:0} overlay_0-328:{mountpoint:/var/lib/containers/storage/overlay/36a8b12969dd59f6fd4a5a209e0c83de25868a89069ea9761a4c1d6f4c62f5a8/merged major:0 minor:328 fsType:overlay blockSize:0} overlay_0-330:{mountpoint:/var/lib/containers/storage/overlay/929d637ecc6fd5e860f325a85e9e873533bbe70ceb849f069e8dc0147519954b/merged major:0 minor:330 fsType:overlay blockSize:0} overlay_0-331:{mountpoint:/var/lib/containers/storage/overlay/73ed807e4e98f3aa6e8f982ad1f9d33c678cf1d42de3887d59fa5eddc2f84624/merged major:0 minor:331 fsType:overlay blockSize:0} overlay_0-337:{mountpoint:/var/lib/containers/storage/overlay/bbc855eab34fd44fb0937ef41302d3cb969bec4d45875e6d2971f0c4cd7d8ce4/merged major:0 minor:337 fsType:overlay blockSize:0} 
overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/3c917783ae54aa678eda7c101167a7ca3bf85eb1079cb772d12841f71d59d8b9/merged major:0 minor:339 fsType:overlay blockSize:0} overlay_0-340:{mountpoint:/var/lib/containers/storage/overlay/4b7a60bf736d74cc45121fc6c86643a3711aa4b36815f66fd8031ce5b09dd436/merged major:0 minor:340 fsType:overlay blockSize:0} overlay_0-344:{mountpoint:/var/lib/containers/storage/overlay/bc6937b116065623e34b0b5a578f51053c16c8d63bd009fd1d680a9cc97174ee/merged major:0 minor:344 fsType:overlay blockSize:0} overlay_0-347:{mountpoint:/var/lib/containers/storage/overlay/10a64a75aa23c51a2977102b725c47144382bec3c75fd821397daa6c406a2a3e/merged major:0 minor:347 fsType:overlay blockSize:0} overlay_0-354:{mountpoint:/var/lib/containers/storage/overlay/5d4a760f83fb012632ea9a0382493d5daedeb572ee7e48406ca2c80972b92a9c/merged major:0 minor:354 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/c29a4bea951a54ebbce2f13407c1d164f20c2c857becb1b8d59ac77af984cbea/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-359:{mountpoint:/var/lib/containers/storage/overlay/aaab575422771923ae50a64d13689d86efa2495449ba4224493950bc306d2cf6/merged major:0 minor:359 fsType:overlay blockSize:0} overlay_0-362:{mountpoint:/var/lib/containers/storage/overlay/34029c94357cdcf37798948af2ed25e2404dfca2ca0ae4d103a09d8e31576a8f/merged major:0 minor:362 fsType:overlay blockSize:0} overlay_0-364:{mountpoint:/var/lib/containers/storage/overlay/038a2a0019063436fe247f7876c8a07f131cfdcb03b438ef79a40ce7a12b1ce9/merged major:0 minor:364 fsType:overlay blockSize:0} overlay_0-366:{mountpoint:/var/lib/containers/storage/overlay/6acb0b402706e1f8212516a3207ea0e4ce90645f29f5cfc11809b71d44eeece7/merged major:0 minor:366 fsType:overlay blockSize:0} overlay_0-373:{mountpoint:/var/lib/containers/storage/overlay/154090aad2d9f9ba5732b356f6db011854d1c575e54625e9066462bfdb0cb3aa/merged major:0 minor:373 fsType:overlay blockSize:0} 
overlay_0-379:{mountpoint:/var/lib/containers/storage/overlay/db26073eab5183b7b5207a83be06441be0699d0cc34f6af0e7d9537fbb29b265/merged major:0 minor:379 fsType:overlay blockSize:0} overlay_0-382:{mountpoint:/var/lib/containers/storage/overlay/073c6ea703a79dbaf5a713eb93c567c1d5213b4cff9d8c5ba670781e9d8ea5fb/merged major:0 minor:382 fsType:overlay blockSize:0} overlay_0-384:{mountpoint:/var/lib/containers/storage/overlay/fd8bf2f33640c0d4fecda60724758486b0a3cbeb09bab0a32d5920b3bfbe90ac/merged major:0 minor:384 fsType:overlay blockSize:0} overlay_0-387:{mountpoint:/var/lib/containers/storage/overlay/51a0f50628f89cb581b6af0c073b7ea6169ac263cad78fbb5e1801139b3b39ca/merged major:0 minor:387 fsType:overlay blockSize:0} overlay_0-390:{mountpoint:/var/lib/containers/storage/overlay/57ee7739e700b1a82808cde00b21273477cbe3807783b554278b7a2c785a08ff/merged major:0 minor:390 fsType:overlay blockSize:0} overlay_0-392:{mountpoint:/var/lib/containers/storage/overlay/5f30c4e42453da0a6744e58a5f4dc49fb4d1390014f989afc70c44adf7f7d7a2/merged major:0 minor:392 fsType:overlay blockSize:0} overlay_0-394:{mountpoint:/var/lib/containers/storage/overlay/c59ffba411a50566fcbe24798443d2fc8d42a397ad76d5346aca31b6dd648db1/merged major:0 minor:394 fsType:overlay blockSize:0} overlay_0-395:{mountpoint:/var/lib/containers/storage/overlay/7ceb217ef9210cd21b7715f9d64cace47d6c73a1dd41472360411ec44bbc2a9e/merged major:0 minor:395 fsType:overlay blockSize:0} overlay_0-397:{mountpoint:/var/lib/containers/storage/overlay/862e0dad5d9bd688106e3b591c3e1e7bce4bab0b7ef998197b3267163650a65f/merged major:0 minor:397 fsType:overlay blockSize:0} overlay_0-401:{mountpoint:/var/lib/containers/storage/overlay/c38589b616e62ea4091fff268ba22fbdc90730de7ecd4760dd131f1f1f3fd293/merged major:0 minor:401 fsType:overlay blockSize:0} overlay_0-403:{mountpoint:/var/lib/containers/storage/overlay/8a5289924e53b41722340247719445471dfd6ab3830a5efdabb6d3e2c2923d11/merged major:0 minor:403 fsType:overlay blockSize:0} 
overlay_0-408:{mountpoint:/var/lib/containers/storage/overlay/2fdf47384431005d0a95f91e875346557bd7558a36e26c844557364b49284758/merged major:0 minor:408 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/8b344146eb81b132abc42176804a8b7d989f46ae1e71c550bb373de90518b192/merged major:0 minor:41 fsType:overlay blockSize:0} overlay_0-410:{mountpoint:/var/lib/containers/storage/overlay/507f43565af39b852c8f483ae556699e1d98d56a27f194d9f657fa0da2cc453c/merged major:0 minor:410 fsType:overlay blockSize:0} overlay_0-412:{mountpoint:/var/lib/containers/storage/overlay/bc9adcb70d4fa4d5418a5d726582aab5cfc116f99a81fd4d985b41a3fd4ee3b3/merged major:0 minor:412 fsType:overlay blockSize:0} overlay_0-414:{mountpoint:/var/lib/containers/storage/overlay/dfe2be3bf063a8001edcb68cebffc22a99f4ccbacf9f22134882a91e608e4ec5/merged major:0 minor:414 fsType:overlay blockSize:0} overlay_0-427:{mountpoint:/var/lib/containers/storage/overlay/987f4dfb75e77990c9c4a199514e4f4e7c472c3b7712f18bedd7dc921fde7ff6/merged major:0 minor:427 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/53c76ecc43293420c85c5dcdbb521c3df32ae40ce32a708738331606439e98a7/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-431:{mountpoint:/var/lib/containers/storage/overlay/5eb2953f214f082bc59d54ff9cfa677da7447d739e6ff1d88f218af4cf0a45d2/merged major:0 minor:431 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/eded65a62615fb59c8e3aef9be0057a8ecf8fa293dd3ce6ccbf3f37b567a8b88/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-442:{mountpoint:/var/lib/containers/storage/overlay/829b3656affcf4ee40fcb79e54e21d37245787c2b2ce9a9119d0da0cb838be7a/merged major:0 minor:442 fsType:overlay blockSize:0} overlay_0-444:{mountpoint:/var/lib/containers/storage/overlay/7d51572143372c9bbb7a1e5351329273b9a4f85ccef61c2d28919f9d1439bcdb/merged major:0 minor:444 fsType:overlay blockSize:0} 
overlay_0-446:{mountpoint:/var/lib/containers/storage/overlay/8a8d1e7013a077e09e8f6fd9971aa1dd19f0925f0cb494ef352d3f333f5546cb/merged major:0 minor:446 fsType:overlay blockSize:0} overlay_0-452:{mountpoint:/var/lib/containers/storage/overlay/15e1ea9e90932a0f3a063ae20ed679b6dd1c611e315f61b6679dd7b875724d75/merged major:0 minor:452 fsType:overlay blockSize:0} overlay_0-454:{mountpoint:/var/lib/containers/storage/overlay/148cf749330f21b89d541a43aaeaa90fbf8016f610b67baa0c6fec3473ed29f5/merged major:0 minor:454 fsType:overlay blockSize:0} overlay_0-456:{mountpoint:/var/lib/containers/storage/overlay/999bee9ba30756aafd4a48b3cb1d08c2e2ce35405b2fbe3e1d22553e3411295b/merged major:0 minor:456 fsType:overlay blockSize:0} overlay_0-458:{mountpoint:/var/lib/containers/storage/overlay/e738b6fdf33afff38652689a7298c10609b5e2f69a71c7cd6e42d0d89ff0b21f/merged major:0 minor:458 fsType:overlay blockSize:0} overlay_0-463:{mountpoint:/var/lib/containers/storage/overlay/e30996a89b6aa6e81bfd41998a230c17e6c3c755f2821a01ef17541f0ead6e11/merged major:0 minor:463 fsType:overlay blockSize:0} overlay_0-466:{mountpoint:/var/lib/containers/storage/overlay/1e96b7b47bcdb5bd3cf22d23e9d7dfa2d990dd4e46925b512958403d949cddda/merged major:0 minor:466 fsType:overlay blockSize:0} overlay_0-467:{mountpoint:/var/lib/containers/storage/overlay/fb6863cc00ddf004dde15c13f784cf0a2ed2c58ea270c4801d1b262bae6134fb/merged major:0 minor:467 fsType:overlay blockSize:0} overlay_0-469:{mountpoint:/var/lib/containers/storage/overlay/192eb0138f8514c96c556c001d314e2f9347071d692595efe9974552d3c10e6b/merged major:0 minor:469 fsType:overlay blockSize:0} overlay_0-483:{mountpoint:/var/lib/containers/storage/overlay/1e967f2c93f1614cfd1ba14e2ace3ec3fe008116b6fc3c479e46a82329ad695d/merged major:0 minor:483 fsType:overlay blockSize:0} overlay_0-487:{mountpoint:/var/lib/containers/storage/overlay/c5b483cc2470cc80bd9ead02f9aa73882192f1890d0b8703d7b3e35b0bbb08f9/merged major:0 minor:487 fsType:overlay blockSize:0} 
overlay_0-492:{mountpoint:/var/lib/containers/storage/overlay/6bd28d9133b4ad3717bb364e138ae54599ec27abefef7a3ab5120cd2d91650a8/merged major:0 minor:492 fsType:overlay blockSize:0} overlay_0-496:{mountpoint:/var/lib/containers/storage/overlay/e47c3fc727768ec10689b23d824ce61e9ca91ee88368646b4fefb647d4dfd1d0/merged major:0 minor:496 fsType:overlay blockSize:0} overlay_0-498:{mountpoint:/var/lib/containers/storage/overlay/da9b5e7c74796b596d66383d66f5104d28602fda5caab846fd9299a60db48226/merged major:0 minor:498 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/65a742bd3be350d693f7c74be705857bf3c22896bb0d0bc7b1bb5890961766b7/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-502:{mountpoint:/var/lib/containers/storage/overlay/1d1ecac5f0476cf5688f3e84f141694fa32d1d0f55d10f7ec3438afeb4f0e7d4/merged major:0 minor:502 fsType:overlay blockSize:0} overlay_0-515:{mountpoint:/var/lib/containers/storage/overlay/03000590f03f823431274f4c82f1aec6b68d2653d0762a556bc832a6981b64d8/merged major:0 minor:515 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/6bf87efb77eed3332d836786b400db0d959de42d31190910766fcbdf820d47fd/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-528:{mountpoint:/var/lib/containers/storage/overlay/32c046417a351b5cda1fc393efdf1f8a50314f1d04345f163c77885ef39c1716/merged major:0 minor:528 fsType:overlay blockSize:0} overlay_0-53:{mountpoint:/var/lib/containers/storage/overlay/7bcc5717fbea6e2f5d69172a30296360b12d49d86e67396fd94fb5a6668a42b2/merged major:0 minor:53 fsType:overlay blockSize:0} overlay_0-530:{mountpoint:/var/lib/containers/storage/overlay/cb23bab5336cc6b2b768333bdc95d27f43f45659e0360da300aef723d7838e92/merged major:0 minor:530 fsType:overlay blockSize:0} overlay_0-532:{mountpoint:/var/lib/containers/storage/overlay/89ee8f202242c20c6f0247edae5c87b0b4f78fffc69e4e32693c8cfb217a985a/merged major:0 
minor:532 fsType:overlay blockSize:0} overlay_0-534:{mountpoint:/var/lib/containers/storage/overlay/444d20de3ac8c79a54fbb6c209e70750e43f41686b3f31d9a6853cdc3753fe9e/merged major:0 minor:534 fsType:overlay blockSize:0} overlay_0-538:{mountpoint:/var/lib/containers/storage/overlay/12407a62113cb5b366713a22e21a7a23adab852bc416a310380d4377c0d7dc08/merged major:0 minor:538 fsType:overlay blockSize:0} overlay_0-540:{mountpoint:/var/lib/containers/storage/overlay/f1aa267d4ecbcf2832126bad5c43476f194b654c204b1bd8ce44c39910dd3057/merged major:0 minor:540 fsType:overlay blockSize:0} overlay_0-542:{mountpoint:/var/lib/containers/storage/overlay/04dd53c780a25edffc8491afd5b2f0ad6b16f9bf2814b8519b877eadda552c95/merged major:0 minor:542 fsType:overlay blockSize:0} overlay_0-548:{mountpoint:/var/lib/containers/storage/overlay/b5edcc1b5c84639be7bc0184c46257800dd3b46bb57b50e044ad7bcbb0c60e4f/merged major:0 minor:548 fsType:overlay blockSize:0} overlay_0-553:{mountpoint:/var/lib/containers/storage/overlay/83f46bfe24ad0db21f3c2441415a047f9e989ebf2d408cb3da390ab46dce7150/merged major:0 minor:553 fsType:overlay blockSize:0} overlay_0-557:{mountpoint:/var/lib/containers/storage/overlay/8aa53400b4eb34a61a7ea358504bbdaa6da819d14ddc60c26403949cadf3cffa/merged major:0 minor:557 fsType:overlay blockSize:0} overlay_0-559:{mountpoint:/var/lib/containers/storage/overlay/1bdcd681045d3ead30e6e55e6cdc8a3fef31813f65644a70d4eba0dcee032fb0/merged major:0 minor:559 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/50437323d4e3b3adf92daf2177a64a9f1a69c6578c4a581827d93ffa23511a17/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-561:{mountpoint:/var/lib/containers/storage/overlay/ea3de176a92926724fa3a67f567557dd6b14d55359a777941c237be83051e7ec/merged major:0 minor:561 fsType:overlay blockSize:0} overlay_0-567:{mountpoint:/var/lib/containers/storage/overlay/7e7544899786a1d0358263bf8d0069c73bc5f3e22676bcdc9bdf96fd8db02215/merged major:0 minor:567 
fsType:overlay blockSize:0} overlay_0-568:{mountpoint:/var/lib/containers/storage/overlay/619ebce85985e43537d32908753da99f394d3a5af8635c1975b9d7ad3d968714/merged major:0 minor:568 fsType:overlay blockSize:0} overlay_0-570:{mountpoint:/var/lib/containers/storage/overlay/eb957edf0bced6ba21b98956bb0c16d510ec242c95a20dc1ade8551e7ef88f3b/merged major:0 minor:570 fsType:overlay blockSize:0} overlay_0-572:{mountpoint:/var/lib/containers/storage/overlay/734f953c22b261ce6c5396e1ff11e2c299a3abf909daa044c9aefd77e68ce569/merged major:0 minor:572 fsType:overlay blockSize:0} overlay_0-578:{mountpoint:/var/lib/containers/storage/overlay/0a039de31f9c81a3001edc3939bcbc50b569cbaf42807c801ad0b835920e1b74/merged major:0 minor:578 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/a8a3c06fd7934077fef989b864ce6c1e8643e3f9a1f6b7f8ad3e457aa4eacb03/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-600:{mountpoint:/var/lib/containers/storage/overlay/d4fda914a5f50630acdb4d3c597fc417ca371e62e95d12bf97aac6895efdc459/merged major:0 minor:600 fsType:overlay blockSize:0} overlay_0-612:{mountpoint:/var/lib/containers/storage/overlay/de91be750b4bedae1f5c27dfa4530a2e1cbd088e2dc55dd4efb8ae1a793b7624/merged major:0 minor:612 fsType:overlay blockSize:0} overlay_0-614:{mountpoint:/var/lib/containers/storage/overlay/5e2c51d7534c9f269fa69db04920b9aef227cb1c778e86aa1128d19acfbf0cb6/merged major:0 minor:614 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/02ae7df61298da85844631b075cc9b116989a3dabe7a91f5ad056d6a718d20ce/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-626:{mountpoint:/var/lib/containers/storage/overlay/acb68be04d396ee253cf7ac4a6cf53a10ade08fef87a2e43c8c0abb36f96f996/merged major:0 minor:626 fsType:overlay blockSize:0} overlay_0-631:{mountpoint:/var/lib/containers/storage/overlay/aae3ac4a71316ffbebc3b985aec7be49ed1012db45ce574b83b49b0ba83c12ad/merged major:0 minor:631 fsType:overlay 
blockSize:0} overlay_0-635:{mountpoint:/var/lib/containers/storage/overlay/63c33d36e961a2d88760570b376db1d560a863f6e8cc35c836b35f6dca40a8fd/merged major:0 minor:635 fsType:overlay blockSize:0} overlay_0-637:{mountpoint:/var/lib/containers/storage/overlay/f60ae9222c84831999bd4bb10bfdf8641bfd0f557ff0c138ae6be1c739abd67a/merged major:0 minor:637 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/36d1a9bda3403720c8f2f1887bddf8ffde06e08a675664ee7b1091e89c922e8a/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-648:{mountpoint:/var/lib/containers/storage/overlay/e6b829c6e70870da4d2c5c3740098fc625d7b89696d30e1381abe0a5d3547620/merged major:0 minor:648 fsType:overlay blockSize:0} overlay_0-650:{mountpoint:/var/lib/containers/storage/overlay/1f567835dcf2e5162f295e893ddf7083db321f6efec1e9e358157021902e3feb/merged major:0 minor:650 fsType:overlay blockSize:0} overlay_0-657:{mountpoint:/var/lib/containers/storage/overlay/d22e51fd0818cc30292df70aefc483f9d629d7c46a08bee70865e78bd94b64dc/merged major:0 minor:657 fsType:overlay blockSize:0} overlay_0-665:{mountpoint:/var/lib/containers/storage/overlay/d3a5032a8014f72dbbff968f91aed424ce8cb46ce4a7903f932e543baf4527d0/merged major:0 minor:665 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/18ec591a2dce64641a10afdd9a74f090dc5ce1cfc39f3771731d1c04f98e2fc4/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-670:{mountpoint:/var/lib/containers/storage/overlay/88c6306bb5212c65192af7221402649458c9fa8f7fcd3863ef4a6ee5e24dd5f5/merged major:0 minor:670 fsType:overlay blockSize:0} overlay_0-682:{mountpoint:/var/lib/containers/storage/overlay/c28f97a1894031ceb4e7377835809d303662ffe0a53ec2e3ccee7466b38e7416/merged major:0 minor:682 fsType:overlay blockSize:0} overlay_0-683:{mountpoint:/var/lib/containers/storage/overlay/38f23e0fa7a2cbc2d5ddf640f22cdc07f4e9f537c65f163e6b953d7e9a900017/merged major:0 minor:683 fsType:overlay blockSize:0} 
overlay_0-684:{mountpoint:/var/lib/containers/storage/overlay/148ea490df52539aed59f34647caf665a9574abf897810c7ad06d9a0324730bd/merged major:0 minor:684 fsType:overlay blockSize:0} overlay_0-692:{mountpoint:/var/lib/containers/storage/overlay/17b10f521c3acf21f59d7439108d36ad3ff99cc2226cbbf72a83e8aced394084/merged major:0 minor:692 fsType:overlay blockSize:0} overlay_0-698:{mountpoint:/var/lib/containers/storage/overlay/0f03fa3ce8d342135cf7b4d4d07b28729dfa93bbd543d69cc7fb66b1f3622dbd/merged major:0 minor:698 fsType:overlay blockSize:0} overlay_0-708:{mountpoint:/var/lib/containers/storage/overlay/fce843ae4c75eb3746aea995057f152b61f2727776a31c2ee28ad9cdd43cda18/merged major:0 minor:708 fsType:overlay blockSize:0} overlay_0-711:{mountpoint:/var/lib/containers/storage/overlay/fc0fc363f63c55f95638ee8b9d9698065626282ffe5f7890f5a28789da274993/merged major:0 minor:711 fsType:overlay blockSize:0} overlay_0-716:{mountpoint:/var/lib/containers/storage/overlay/d8bd3f229f12aa3741c39020868b90a1000a23898edc330396e6f44b4126b993/merged major:0 minor:716 fsType:overlay blockSize:0} overlay_0-718:{mountpoint:/var/lib/containers/storage/overlay/8819acb04c42997069b032a08296ec41407f4170326903ebc65b277beb02b00d/merged major:0 minor:718 fsType:overlay blockSize:0} overlay_0-719:{mountpoint:/var/lib/containers/storage/overlay/e901c1b901dba59fe4abb7987332dcdd496441cb3fe3476d98af1f46fa2cfa9c/merged major:0 minor:719 fsType:overlay blockSize:0} overlay_0-727:{mountpoint:/var/lib/containers/storage/overlay/df79e5a0611f883ced1eec94366599d38223106a7a6c13204b786ed68e77db32/merged major:0 minor:727 fsType:overlay blockSize:0} overlay_0-731:{mountpoint:/var/lib/containers/storage/overlay/85969b275969dbd5f373ef6611de69db82a6868f1de29a469db5e3f3f8d483f8/merged major:0 minor:731 fsType:overlay blockSize:0} overlay_0-736:{mountpoint:/var/lib/containers/storage/overlay/b836f4c2495be15b6108a56780d1693a816a9bb8f58e14bbfd524409cda72e18/merged major:0 minor:736 fsType:overlay blockSize:0} 
overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/fed0dfb3f9f31c13a5ad1bb8ecbcff3a5e369ac77237d956d8a732dbda6ef1de/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-741:{mountpoint:/var/lib/containers/storage/overlay/e47f253ecc2b28244f0ddaeb6c15e8e24d0eb39728bc33405ac991e99a5ce69d/merged major:0 minor:741 fsType:overlay blockSize:0} overlay_0-743:{mountpoint:/var/lib/containers/storage/overlay/49017cff8f8306eaffdf473db98a741e7f1072d94390650df40f0b19836e59d2/merged major:0 minor:743 fsType:overlay blockSize:0} overlay_0-746:{mountpoint:/var/lib/containers/storage/overlay/da1a63f3856da1ac4cb58f8b6a7b258d403526dc0d54ca845a949b8dc1da8da9/merged major:0 minor:746 fsType:overlay blockSize:0} overlay_0-748:{mountpoint:/var/lib/containers/storage/overlay/16ecfea7f49de29d1108b8c7291090ccf9e16ce770b48a2c4f28f1f725c1f90a/merged major:0 minor:748 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/cb84f525cc83b57d8574b946f3c0db51799b48ef01a6cd5239ed3cd97e2f5ede/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-767:{mountpoint:/var/lib/containers/storage/overlay/77daede5f82be7a1b68ea998fc531698de55908ee4163d1bc367f213ea50ac30/merged major:0 minor:767 fsType:overlay blockSize:0} overlay_0-773:{mountpoint:/var/lib/containers/storage/overlay/546736f72899db30d8d9dd2d59b59ccfb5596bc3bda4852674a42c1c4688dcec/merged major:0 minor:773 fsType:overlay blockSize:0} overlay_0-774:{mountpoint:/var/lib/containers/storage/overlay/587b5597341c9180773dbd82e46899ecf6b3760c767898a3418d2f1e2d557ee9/merged major:0 minor:774 fsType:overlay blockSize:0} overlay_0-776:{mountpoint:/var/lib/containers/storage/overlay/f85d46bec5851ff37c6f9586cc9c820bc1c6466ce18c7b5f07f7d25d5d1b3e54/merged major:0 minor:776 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/63542e62fe1ae5883637f9e3194ec246da80e466d48c124ba0c4228ffb68271f/merged major:0 minor:78 fsType:overlay blockSize:0} 
overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/f8fa6da59dcbf6f1f440366483e8253abb1072d8b6222d52a8c49fc094f70b19/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-812:{mountpoint:/var/lib/containers/storage/overlay/cdfabd306f87e6b7eb1aef4e3fb06a3dace31e2b7c5104847932d66b4836753e/merged major:0 minor:812 fsType:overlay blockSize:0} overlay_0-820:{mountpoint:/var/lib/containers/storage/overlay/d98ca2e9656346c4a2e97bf59b375d322ada4027933aa7825c91d5e6ecb9af2f/merged major:0 minor:820 fsType:overlay blockSize:0} overlay_0-822:{mountpoint:/var/lib/containers/storage/overlay/5cbc506e3c575f57d72c6f15c01fee1ecf34fea7c95e4439494ddaca66ab6300/merged major:0 minor:822 fsType:overlay blockSize:0} overlay_0-824:{mountpoint:/var/lib/containers/storage/overlay/e05696a38eb7339cacb07609f54edf4706f7a1ec769de4000ff387366c3ebfa3/merged major:0 minor:824 fsType:overlay blockSize:0} overlay_0-826:{mountpoint:/var/lib/containers/storage/overlay/ead449cb46685635090653ea0294c854908221aab5ee12b157ea83b6d060a133/merged major:0 minor:826 fsType:overlay blockSize:0} overlay_0-83:{mountpoint:/var/lib/containers/storage/overlay/182cb6786952ea795a61d0cd5d7c4435d0ee2d053f7d7e08d2edffefba71efad/merged major:0 minor:83 fsType:overlay blockSize:0} overlay_0-836:{mountpoint:/var/lib/containers/storage/overlay/9002882fadc8226128bff35243e9d38aca08ff267302c353dba91d28fd4f92ce/merged major:0 minor:836 fsType:overlay blockSize:0} overlay_0-838:{mountpoint:/var/lib/containers/storage/overlay/e8e9307ecf5a17e00688f2984c19c8b54215dc0daa5c5ac915dad8d1a3cd1073/merged major:0 minor:838 fsType:overlay blockSize:0} overlay_0-840:{mountpoint:/var/lib/containers/storage/overlay/37d4dc2705a9f9bf138691a968eca5cdf3a865d9b3b5c6220605be8d918b05cc/merged major:0 minor:840 fsType:overlay blockSize:0} overlay_0-842:{mountpoint:/var/lib/containers/storage/overlay/d6347d9f05f159f61d443946353df5a64a38498c308444a4e3012ae7d754375c/merged major:0 minor:842 fsType:overlay blockSize:0} 
overlay_0-844:{mountpoint:/var/lib/containers/storage/overlay/f69288b24e3167297f4e5359f20aaae8afa952493962288683a7d73a7c080cf2/merged major:0 minor:844 fsType:overlay blockSize:0} overlay_0-846:{mountpoint:/var/lib/containers/storage/overlay/5eb57d20b01a704ba9b4835ccb505e77c70523f26112749c752b6881c168a1ef/merged major:0 minor:846 fsType:overlay blockSize:0} overlay_0-848:{mountpoint:/var/lib/containers/storage/overlay/2611cc59345ef63085a30237c7708f2b07f1f715577903c7a70783217a5631e1/merged major:0 minor:848 fsType:overlay blockSize:0} overlay_0-850:{mountpoint:/var/lib/containers/storage/overlay/588b33fc86d6eff7743244fb27f8650db97655f2d6cad25c4d08ab73e55e5728/merged major:0 minor:850 fsType:overlay blockSize:0} overlay_0-852:{mountpoint:/var/lib/containers/storage/overlay/2f82fccb9c3c23875deb76c2f5ae5989a6bf149ef71668b8ba2ed224fbd14995/merged major:0 minor:852 fsType:overlay blockSize:0} overlay_0-853:{mountpoint:/var/lib/containers/storage/overlay/3a226b3a1d9816c7563bfcc9d36f0e454d7d931146644ee2e8a26f0ec5312906/merged major:0 minor:853 fsType:overlay blockSize:0} overlay_0-856:{mountpoint:/var/lib/containers/storage/overlay/b2544e0f528e932a71b0abef8775f04fad814d71bfce9c3d1092066004762e25/merged major:0 minor:856 fsType:overlay blockSize:0} overlay_0-858:{mountpoint:/var/lib/containers/storage/overlay/2391d62d9d79eea490be57a7e63e185a84cd57c2ba8c3cc04e0f758ced37c9ec/merged major:0 minor:858 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/8cf3ee44838404f421205a3506a77cab5b4bb70b3f37838da99ec7c25aa53c1d/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-860:{mountpoint:/var/lib/containers/storage/overlay/5a9de4b39c96fe1f208f6fef17449badee285420ab78ab6cc28332669b48f940/merged major:0 minor:860 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/0c6debf95e12f6b7240b8b8f9dd437750ffa07dc880aed6893f482747c8c938e/merged major:0 minor:87 fsType:overlay blockSize:0} 
overlay_0-870:{mountpoint:/var/lib/containers/storage/overlay/785c3505535a18ee796b680af9b16d2e67ce3ce1d1284a363d8dd24a6a133393/merged major:0 minor:870 fsType:overlay blockSize:0} overlay_0-872:{mountpoint:/var/lib/containers/storage/overlay/219c267c9cec8a23cf12e78e0a28ecc1456c4e1fc3f9daa3cbcb98d44089ef4a/merged major:0 minor:872 fsType:overlay blockSize:0} overlay_0-873:{mountpoint:/var/lib/containers/storage/overlay/d0917c91545bc4590d82105f12433736ac062a4187fe93ff07b0c5db2f1e079b/merged major:0 minor:873 fsType:overlay blockSize:0} overlay_0-897:{mountpoint:/var/lib/containers/storage/overlay/97f935bc01658204432f96eacb0add7b3aaf78bf770b5a2222bb452cbb0873a8/merged major:0 minor:897 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/ba2d6e4f1fc6c1054168528d5febd46b8087d10fd759b19f844b8df8e908551c/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-902:{mountpoint:/var/lib/containers/storage/overlay/77cc12dcce7c307a285c01967e708711d602f44d2d77c9f0623c4e2c08a89ac2/merged major:0 minor:902 fsType:overlay blockSize:0} overlay_0-916:{mountpoint:/var/lib/containers/storage/overlay/f3f785a8b3e70986cae1b08834de87f9a9df50c4bac97acaa48af0acf583a073/merged major:0 minor:916 fsType:overlay blockSize:0} overlay_0-918:{mountpoint:/var/lib/containers/storage/overlay/4e11a29fb24d703e5eafe1d29a2188c38435f737b36b851c5e89a76800189739/merged major:0 minor:918 fsType:overlay blockSize:0} overlay_0-920:{mountpoint:/var/lib/containers/storage/overlay/fe16cf4f4ef008824e0d8b9d966bab6d43050c6e44b60b810e088a12c8da0141/merged major:0 minor:920 fsType:overlay blockSize:0} overlay_0-922:{mountpoint:/var/lib/containers/storage/overlay/f481213dd7666696a6b0104354acaf62451b8cc5bcc8cb99be44c197e8ca6db5/merged major:0 minor:922 fsType:overlay blockSize:0} overlay_0-925:{mountpoint:/var/lib/containers/storage/overlay/91545e230d762b10076fcf5517a66c9931a12d67389a2016d6ff9f3207673fc2/merged major:0 minor:925 fsType:overlay blockSize:0} 
overlay_0-926:{mountpoint:/var/lib/containers/storage/overlay/a45fb05cbaebb0c4a310865072495b630897a64cd5354ccca9b0ebec680118e7/merged major:0 minor:926 fsType:overlay blockSize:0} overlay_0-928:{mountpoint:/var/lib/containers/storage/overlay/a81d93dd427d0c98b694aae6405883a33e76f4314d32e0d3748db58a091e1c12/merged major:0 minor:928 fsType:overlay blockSize:0} overlay_0-930:{mountpoint:/var/lib/containers/storage/overlay/4c9d17f56b60dae90b5e6ce276c01a2af1e14e223656bc12d3b508db4be8b972/merged major:0 minor:930 fsType:overlay blockSize:0} overlay_0-932:{mountpoint:/var/lib/containers/storage/overlay/010ea6cf584a56c9396512d5bb8efb9b932ff06349e6df3aabbcc90020fc8c3f/merged major:0 minor:932 fsType:overlay blockSize:0} overlay_0-933:{mountpoint:/var/lib/containers/storage/overlay/4e803583384f090669f7d7ca62b65009ec800d42f3e6646433e21f1cc446d1b2/merged major:0 minor:933 fsType:overlay blockSize:0} overlay_0-935:{mountpoint:/var/lib/containers/storage/overlay/b6be2d427a68463736146536755217ffddf878b72bb0fb7cd6b206ca534e9d27/merged major:0 minor:935 fsType:overlay blockSize:0} overlay_0-940:{mountpoint:/var/lib/containers/storage/overlay/8e68cc5f5829e167b8c184473de2b7d8df19b6be7e1ce040cb467279d4831bea/merged major:0 minor:940 fsType:overlay blockSize:0} overlay_0-944:{mountpoint:/var/lib/containers/storage/overlay/0aa4ad3f36ad09fb74f4a97bb06a02d369d2069383f3a8feaafc0728f7170b8b/merged major:0 minor:944 fsType:overlay blockSize:0} overlay_0-945:{mountpoint:/var/lib/containers/storage/overlay/a6a95d804fff2410d89f047228b2f83c1609f95119ab1fbfd49aec98aab83852/merged major:0 minor:945 fsType:overlay blockSize:0} overlay_0-947:{mountpoint:/var/lib/containers/storage/overlay/1818fec5be132ffcb5a70ed6b23e1f91c12b1c7760e24824f706ce694bdbb77a/merged major:0 minor:947 fsType:overlay blockSize:0} overlay_0-949:{mountpoint:/var/lib/containers/storage/overlay/85df64222beb5c20b1cb4549b7d801d25e7c8d8e315b9174aa5ef75d7459cdc3/merged major:0 minor:949 fsType:overlay blockSize:0} 
overlay_0-953:{mountpoint:/var/lib/containers/storage/overlay/13672f8267cdd7bb5c5e5c71f8948c60fec1bd2c0c6369ea3adfd37874136a34/merged major:0 minor:953 fsType:overlay blockSize:0} overlay_0-955:{mountpoint:/var/lib/containers/storage/overlay/a6094eb19787e11482f0ddb390e2c1f98c0df671ab866cf3bad803f2ef27d622/merged major:0 minor:955 fsType:overlay blockSize:0} overlay_0-960:{mountpoint:/var/lib/containers/storage/overlay/64c010827c3e6256a358b45f8b24a3d1a45828b6106f9a10751c1b92af26222e/merged major:0 minor:960 fsType:overlay blockSize:0} overlay_0-963:{mountpoint:/var/lib/containers/storage/overlay/269a837cfcbd535c403165b867812e978c9c3c61eea16c440230199de9b4f088/merged major:0 minor:963 fsType:overlay blockSize:0} overlay_0-965:{mountpoint:/var/lib/containers/storage/overlay/edb23305a9e62f2f8ee80467fe357e18c8b9d44d2383a7f85726d1ee14cdf82b/merged major:0 minor:965 fsType:overlay blockSize:0} overlay_0-966:{mountpoint:/var/lib/containers/storage/overlay/3d1a2ceef02093e2997a95b1a4dbe688d0f500bc161853ce0aa296f770a350a2/merged major:0 minor:966 fsType:overlay blockSize:0} overlay_0-971:{mountpoint:/var/lib/containers/storage/overlay/0438a0bf0fa3718b8672aeccb4f01524587857bac462a6c4cc11a727108db046/merged major:0 minor:971 fsType:overlay blockSize:0} overlay_0-973:{mountpoint:/var/lib/containers/storage/overlay/01f172e973bfed23a2e5d30c5eb4c0a629288672d9cff4176e82d545fa44938a/merged major:0 minor:973 fsType:overlay blockSize:0} overlay_0-976:{mountpoint:/var/lib/containers/storage/overlay/cc635a91b3f39d570d4e777daaf5ad8344239eea04bd3816236eb85451336c5a/merged major:0 minor:976 fsType:overlay blockSize:0} overlay_0-98:{mountpoint:/var/lib/containers/storage/overlay/db8ffde7512681e256488381fe50aa975aec923dd64b5f363c68deda5673dfb6/merged major:0 minor:98 fsType:overlay blockSize:0} overlay_0-99:{mountpoint:/var/lib/containers/storage/overlay/8658f956ff7cfd8055fd92a3974bb9deaae8560c2c4b254665225c6621d802ce/merged major:0 minor:99 fsType:overlay blockSize:0}] Feb 20 15:01:01.948964 
master-0 kubenswrapper[28120]: I0220 15:01:01.946002 28120 manager.go:217] Machine: {Timestamp:2026-02-20 15:01:01.943754533 +0000 UTC m=+0.204548146 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:48b9dd3ce20842759e3dc6160315340b SystemUUID:48b9dd3c-e208-4275-9e3d-c6160315340b BootID:509e02d8-f41f-4d6f-8d1a-4efa2a52c9c0 Filesystems:[{Device:/var/lib/kubelet/pods/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:1123 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/13613c47bf97c812cc9e166f449f1af9864a34c9dcb66bd85e8e3c727e970a41/userdata/shm DeviceMajor:0 DeviceMinor:97 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95115710de33578fe832a95630e8d98eba6ecc806a442bdc7740ad889ac1e80b/userdata/shm DeviceMajor:0 DeviceMinor:131 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-222 DeviceMajor:0 DeviceMinor:222 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-670 DeviceMajor:0 DeviceMinor:670 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-922 DeviceMajor:0 DeviceMinor:922 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3209ad8e141d4f4023abb0b8711dc267473b98fd78163c32b9a46c610babe186/userdata/shm DeviceMajor:0 DeviceMinor:1226 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de/volumes/kubernetes.io~projected/kube-api-access-wcfnf DeviceMajor:0 DeviceMinor:906 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/6ea59bb762ddd917687d0ab9c9b4c4c212079c243fa33d303d25cc82d89c923b/userdata/shm DeviceMajor:0 DeviceMinor:1064 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1173 DeviceMajor:0 DeviceMinor:1173 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-344 DeviceMajor:0 DeviceMinor:344 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/80b53aa57494cc0bc6bbacad6b2e04131adc3c0ab6e7a77f83dd0c6c91461d7d/userdata/shm DeviceMajor:0 DeviceMinor:518 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7d7dfb1a01a9470453018e9e4e99ad966573e066e4eb9b370f42ef7d7426a75e/userdata/shm DeviceMajor:0 DeviceMinor:519 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-570 DeviceMajor:0 DeviceMinor:570 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1197 DeviceMajor:0 DeviceMinor:1197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1230 DeviceMajor:0 DeviceMinor:1230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-708 DeviceMajor:0 DeviceMinor:708 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8157f73d-c757-40c4-80bc-3c9de2f2288a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-408 DeviceMajor:0 
DeviceMinor:408 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0bedbe69-fc4b-4bd7-bcc2-acead927eda2/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:782 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4ecbdf77-0c73-487e-943e-5315a0f8b8d4/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:900 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-635 DeviceMajor:0 DeviceMinor:635 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ef3a09a5-b019-48a3-97f8-7ddadb37394e/volumes/kubernetes.io~projected/kube-api-access-pcqd4 DeviceMajor:0 DeviceMinor:1072 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-125 DeviceMajor:0 DeviceMinor:125 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7cd291b9260d8474da6db1ea27593954a0b8a80d92876d3da551d5f4c38e22a4/userdata/shm DeviceMajor:0 DeviceMinor:1167 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/81b14b205a5b43d7cf78b359f564d3ae3e67aaf00f87262df973d130ce6f30c0/userdata/shm DeviceMajor:0 DeviceMinor:513 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:545 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/733f20d59a2548ac1c9bcca1dc13fb3a2581f1cde83bb3bdf7f826c178e76f76/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/989af121-da08-4f40-b08c-dd2aa67bc60c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:256 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-362 DeviceMajor:0 DeviceMinor:362 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-401 DeviceMajor:0 DeviceMinor:401 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1263 DeviceMajor:0 DeviceMinor:1263 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-530 DeviceMajor:0 DeviceMinor:530 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/26473c28-db42-47e6-9164-8c441ccc48ca/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:480 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/99fe3b99-0b40-4887-bcc8-59caa515b99f/volumes/kubernetes.io~projected/kube-api-access-dkc7z DeviceMajor:0 DeviceMinor:1158 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-553 DeviceMajor:0 DeviceMinor:553 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ee3a6748-0bbc-41bf-8726-a8db18faf03b/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:582 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/64e9eca9-bbdd-4eca-9219-922bbab9b388/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:799 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-683 DeviceMajor:0 DeviceMinor:683 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/99fe3b99-0b40-4887-bcc8-59caa515b99f/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1154 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fc334fff-c0bf-4905-bcdb-b0d2a35b0590/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:433 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/31d71c90-cab7-4411-9426-0713cb026294/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:509 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-949 DeviceMajor:0 DeviceMinor:949 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-698 DeviceMajor:0 DeviceMinor:698 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31d71c90-cab7-4411-9426-0713cb026294/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:505 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-354 DeviceMajor:0 DeviceMinor:354 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-966 DeviceMajor:0 DeviceMinor:966 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/45d7ef0c-272b-4d1e-965f-484975d5d25c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/996d4949-f92c-42ac-9bda-8c6ec0295e92/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:1083 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-776 DeviceMajor:0 
DeviceMinor:776 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ba0f9ce144b093c1fbdb0462da21ced21845e2aa8fb2233766270fcddb816e51/userdata/shm DeviceMajor:0 DeviceMinor:380 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4d7a859ad253e344142e3d8002817623ee421d3b324eff2b6246c1b1fdd11bc1/userdata/shm DeviceMajor:0 DeviceMinor:481 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/56dab50a6ee92d8b7787a1ffbdfc72e9a26511781eb108040e7d6dc84a65109f/userdata/shm DeviceMajor:0 DeviceMinor:796 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/34cc992d367669608546ba8ae39873d4139dfeeb4850c5979567cde508c8b524/userdata/shm DeviceMajor:0 DeviceMinor:772 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/34bf21f0d5e74283c2c3382d9b925b925de6b532a3f67ab7bff4afdbe95f9332/userdata/shm DeviceMajor:0 DeviceMinor:440 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1043 DeviceMajor:0 DeviceMinor:1043 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a3b80d783578c7d5bcce0396d10b0b7507567b7ddeed1d7dec131680bd38e6da/userdata/shm DeviceMajor:0 DeviceMinor:694 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1299 DeviceMajor:0 DeviceMinor:1299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 
DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-270 DeviceMajor:0 DeviceMinor:270 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9bd614ac7dafc38d2154363d724a872731a806692546d4bc858006cdc5ade17d/userdata/shm DeviceMajor:0 DeviceMinor:285 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1172 DeviceMajor:0 DeviceMinor:1172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~secret/federate-client-tls DeviceMajor:0 DeviceMinor:264 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-935 DeviceMajor:0 DeviceMinor:935 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5f55b652-bef8-4f50-9d1d-9d0a340c1dea/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1073 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1241 DeviceMajor:0 DeviceMinor:1241 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-743 DeviceMajor:0 DeviceMinor:743 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1339 DeviceMajor:0 DeviceMinor:1339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-442 DeviceMajor:0 DeviceMinor:442 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d28490b0-96ca-4fe0-8fae-e6f8390f933b/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:506 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-657 DeviceMajor:0 DeviceMinor:657 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4e9788fdd4565e3a230622830adb39ca18b14112a272177c052904a2d24b6cd0/userdata/shm DeviceMajor:0 
DeviceMinor:1165 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:792 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-916 DeviceMajor:0 DeviceMinor:916 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-926 DeviceMajor:0 DeviceMinor:926 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-711 DeviceMajor:0 DeviceMinor:711 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-444 DeviceMajor:0 DeviceMinor:444 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/86f6836b-b018-4c7a-87ad-51809a4b9c7a/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:784 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/93786626-fac4-48f0-bf72-992bc39f4a82/volumes/kubernetes.io~projected/kube-api-access-fm2jn DeviceMajor:0 DeviceMinor:907 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/43e9807a-859c-44c1-8511-0066b0f59ff8/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-858 DeviceMajor:0 DeviceMinor:858 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-955 DeviceMajor:0 DeviceMinor:955 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/49defec6-a225-47ab-99ff-7a846f23eb00/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1287 Capacity:49335554048 
Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-412 DeviceMajor:0 DeviceMinor:412 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d3ca2d2f-9f31-4524-a28f-cf16b02dd711/volumes/kubernetes.io~projected/kube-api-access-4jn8g DeviceMajor:0 DeviceMinor:259 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:263 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7190b6f768a0fe97808696f83db6e3236f51dc32c15727d9791bd6e154e97696/userdata/shm DeviceMajor:0 DeviceMinor:293 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8468bd2a2161175e696f20868531488b079471cbb37c953cccf04ab9a47ce2b3/userdata/shm DeviceMajor:0 DeviceMinor:739 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-844 DeviceMajor:0 DeviceMinor:844 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e94527abc555de66f60f9e134865dfe60d787ebd1878546078cb9b2523c30cab/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ac3680de-aabf-414b-a340-5e5e6aea4822/volumes/kubernetes.io~projected/kube-api-access-rln42 DeviceMajor:0 DeviceMinor:904 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd/volumes/kubernetes.io~projected/kube-api-access-wxjcq DeviceMajor:0 DeviceMinor:688 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-53 DeviceMajor:0 DeviceMinor:53 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/45d7ef0c-272b-4d1e-965f-484975d5d25c/volumes/kubernetes.io~projected/kube-api-access-svhtr 
DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-463 DeviceMajor:0 DeviceMinor:463 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-648 DeviceMajor:0 DeviceMinor:648 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0bedbe69-fc4b-4bd7-bcc2-acead927eda2/volumes/kubernetes.io~projected/kube-api-access-gk2lq DeviceMajor:0 DeviceMinor:790 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2d789ae2430f40a62d0c76334dce72b1228320484eb36b8f7f3663eb8534eb42/userdata/shm DeviceMajor:0 DeviceMinor:1181 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-337 DeviceMajor:0 DeviceMinor:337 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0170d69b891340d8304a044f9ba11f3c45572b8e1e7f16d78f09e0c25d8c5a22/userdata/shm DeviceMajor:0 DeviceMinor:646 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1279 DeviceMajor:0 DeviceMinor:1279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b6e6d218-d969-40b5-a32b-9b2093089dbf/volumes/kubernetes.io~projected/kube-api-access-psd59 DeviceMajor:0 DeviceMinor:130 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0c48d8481d8bb6541d7d83f4ffc4e7c6003e82f4f8d378fb9a1333d706bc6f14/userdata/shm DeviceMajor:0 DeviceMinor:438 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes/kubernetes.io~projected/kube-api-access-9zppr DeviceMajor:0 DeviceMinor:1225 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5f55b652-bef8-4f50-9d1d-9d0a340c1dea/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1068 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/26c5fe83ca44257f00aa75056a5ba23aa71fd99df73033faf567ea11ded1340f/userdata/shm DeviceMajor:0 DeviceMinor:143 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4c31b8a7-edcb-403d-9122-7eb740f7d659/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6949e9d5-460c-4b63-94cb-1b20ad75ee1c/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:690 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-846 DeviceMajor:0 DeviceMinor:846 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/07f2250f0416c7a8aaa5ba7190cd272a32f30bcb4026105fc1ebf0050f1e79f2/userdata/shm DeviceMajor:0 DeviceMinor:324 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/013da989dc1e60fa75e3d1e3955a83dece7eed7353880205a9acd5aa5c2d4d69/userdata/shm DeviceMajor:0 DeviceMinor:1124 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/49044786-483a-406e-8750-f6ded400841d/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:760 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1223 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-812 DeviceMajor:0 DeviceMinor:812 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-932 DeviceMajor:0 DeviceMinor:932 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-119 DeviceMajor:0 DeviceMinor:119 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-310 DeviceMajor:0 DeviceMinor:310 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-534 DeviceMajor:0 DeviceMinor:534 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-359 DeviceMajor:0 DeviceMinor:359 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-487 DeviceMajor:0 DeviceMinor:487 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-940 DeviceMajor:0 DeviceMinor:940 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4ecbdf77-0c73-487e-943e-5315a0f8b8d4/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:899 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1224 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-187 DeviceMajor:0 DeviceMinor:187 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1081 DeviceMajor:0 DeviceMinor:1081 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-774 DeviceMajor:0 DeviceMinor:774 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/04c921d85b432c0d1b6bd571166f434dca8313768c8990c88277ecdb55bd26c7/userdata/shm DeviceMajor:0 DeviceMinor:537 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c81ad608-a8ad-4289-a8d2-d48acb9b540c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9df920ca539f41ddc66a331c27bc3a12a40dbc8ec795ca71f8a746f6b5203647/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-446 DeviceMajor:0 DeviceMinor:446 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-498 DeviceMajor:0 DeviceMinor:498 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ef3a09a5-b019-48a3-97f8-7ddadb37394e/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1070 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1088 DeviceMajor:0 DeviceMinor:1088 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-650 DeviceMajor:0 DeviceMinor:650 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c0b78aa6-7bc8-4221-81f5-bf62a7110380/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1159 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1228 DeviceMajor:0 DeviceMinor:1228 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/787a4fee-6625-4df5-a432-c7e1190da777/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:400 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/volumes/kubernetes.io~projected/kube-api-access-b54xg DeviceMajor:0 DeviceMinor:547 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2210f3254bc0bc47bf63efd7d8223a017f9ce1d63560804be28d1d5db58e4a7d/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/29c1db2527f092355034b5557942ea50b25282b9b77501d427c1a6d0e01d2771/userdata/shm DeviceMajor:0 DeviceMinor:1356 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:756 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1253 DeviceMajor:0 DeviceMinor:1253 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5d2b154b-de63-4c9b-99d8-487fb3035fb9/volumes/kubernetes.io~projected/kube-api-access-mclrj DeviceMajor:0 DeviceMinor:139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-561 DeviceMajor:0 DeviceMinor:561 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6949e9d5-460c-4b63-94cb-1b20ad75ee1c/volumes/kubernetes.io~projected/kube-api-access-jpt8j DeviceMajor:0 DeviceMinor:691 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-840 DeviceMajor:0 DeviceMinor:840 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fc334fff-c0bf-4905-bcdb-b0d2a35b0590/volumes/kubernetes.io~projected/kube-api-access-9lcqg DeviceMajor:0 DeviceMinor:435 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cc5528fa6db2bfe114c1842f536c398cb14a3103bc976fa904abdc30e48bc9b3/userdata/shm DeviceMajor:0 DeviceMinor:818 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1219 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1255 DeviceMajor:0 DeviceMinor:1255 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a4339bd5-b8d1-467e-8158-4464ea901148/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:781 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1364 DeviceMajor:0 DeviceMinor:1364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/51c5a5d32ca643efba642911927baab174d9c9270d18541b0810089261e8c8d5/userdata/shm DeviceMajor:0 DeviceMinor:805 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/996d4949-f92c-42ac-9bda-8c6ec0295e92/volumes/kubernetes.io~projected/kube-api-access-4kfqn DeviceMajor:0 DeviceMinor:1090 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1191 DeviceMajor:0 DeviceMinor:1191 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-944 DeviceMajor:0 DeviceMinor:944 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9fd9f419-2cdc-4991-8fb9-87d76ac58976/volumes/kubernetes.io~projected/kube-api-access-svlzf DeviceMajor:0 DeviceMinor:111 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-185 DeviceMajor:0 DeviceMinor:185 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-1311 DeviceMajor:0 DeviceMinor:1311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-965 DeviceMajor:0 DeviceMinor:965 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/929dffba-46da-4d81-a437-bc6a9fe79811/volumes/kubernetes.io~projected/kube-api-access-9mpr8 DeviceMajor:0 DeviceMinor:309 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/787a4fee-6625-4df5-a432-c7e1190da777/volumes/kubernetes.io~projected/kube-api-access-9k6br DeviceMajor:0 DeviceMinor:405 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-414 DeviceMajor:0 DeviceMinor:414 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c0a3548f-299c-4234-9bf1-c93efcb9740b/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:507 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-532 DeviceMajor:0 DeviceMinor:532 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:686 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/996b54ad7bf339a39ffff49432d0181ad23ef73bddec2b3817ca026944ee2962/userdata/shm DeviceMajor:0 DeviceMinor:69 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1489b48b9281848030ac8650ba6a4f51919e00d3276dcba9cb79f43f94b0f041/userdata/shm DeviceMajor:0 DeviceMinor:113 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/6315ef904771a7f7ee8f8fb64b568088a83f03dc9235439160e67d9df1c9a04f/userdata/shm DeviceMajor:0 DeviceMinor:286 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84da6dcc282a18c48a027b33cd2404e3592b75c697de5dd4ab39e2cebf5cff28/userdata/shm DeviceMajor:0 DeviceMinor:520 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a8c0a6d2-f1f9-49e3-9475-4983b50667bf/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:661 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-637 DeviceMajor:0 DeviceMinor:637 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1047 DeviceMajor:0 DeviceMinor:1047 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-492 DeviceMajor:0 DeviceMinor:492 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:761 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-953 DeviceMajor:0 DeviceMinor:953 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-403 DeviceMajor:0 DeviceMinor:403 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-328 DeviceMajor:0 DeviceMinor:328 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/234a44fd-c153-47a6-a11d-7d4b7165c236/volumes/kubernetes.io~projected/kube-api-access-gwb5n DeviceMajor:0 DeviceMinor:272 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-572 DeviceMajor:0 DeviceMinor:572 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-631 DeviceMajor:0 DeviceMinor:631 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b385880b-a26b-4353-8f6f-b7f926bcc67c/volumes/kubernetes.io~projected/kube-api-access-fwclx DeviceMajor:0 DeviceMinor:788 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1017 DeviceMajor:0 DeviceMinor:1017 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/99fe3b99-0b40-4887-bcc8-59caa515b99f/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1171 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-427 DeviceMajor:0 DeviceMinor:427 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ed48d3d3cb753c9bbe342f9ecdd79f0991ed3456ddbdf3081cbeeab5126bcab1/userdata/shm DeviceMajor:0 DeviceMinor:806 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-963 DeviceMajor:0 DeviceMinor:963 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-973 DeviceMajor:0 DeviceMinor:973 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1045 DeviceMajor:0 DeviceMinor:1045 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1004 DeviceMajor:0 DeviceMinor:1004 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-373 DeviceMajor:0 DeviceMinor:373 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1336 DeviceMajor:0 DeviceMinor:1336 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/33675e96-ce49-49be-9117-954ac7cca5d5/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:166 Capacity:49335554048 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/var/lib/kubelet/pods/5ea4c132-b6d0-4dc9-942d-48e359eed418/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:511 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-469 DeviceMajor:0 DeviceMinor:469 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a39c5481-961c-4ac2-8c5b-a2c0165f4188/volumes/kubernetes.io~projected/kube-api-access-tl7tw DeviceMajor:0 DeviceMinor:1164 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-128 DeviceMajor:0 DeviceMinor:128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-502 DeviceMajor:0 DeviceMinor:502 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-115 DeviceMajor:0 DeviceMinor:115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4c31b8a7-edcb-403d-9122-7eb740f7d659/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-600 DeviceMajor:0 DeviceMinor:600 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/volumes/kubernetes.io~projected/kube-api-access-mwnq7 DeviceMajor:0 DeviceMinor:759 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/49044786-483a-406e-8750-f6ded400841d/volumes/kubernetes.io~projected/kube-api-access-jljjg DeviceMajor:0 DeviceMinor:763 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-379 
DeviceMajor:0 DeviceMinor:379 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1285 DeviceMajor:0 DeviceMinor:1285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1351 DeviceMajor:0 DeviceMinor:1351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-180 DeviceMajor:0 DeviceMinor:180 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/118104a32f855cf343fc9a68201c174973d8b0ae6653c1a549eeef25c7c2eefa/userdata/shm DeviceMajor:0 DeviceMinor:429 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5fc1828a85716c5c152a1e9d497ac8c147726f1a98a02df72c44bdcd9feda4f1/userdata/shm DeviceMajor:0 DeviceMinor:85 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-559 DeviceMajor:0 DeviceMinor:559 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/19ce4b45-db46-4fc3-8d72-963de22f026b/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:616 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-665 DeviceMajor:0 DeviceMinor:665 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1193 DeviceMajor:0 DeviceMinor:1193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1239 DeviceMajor:0 DeviceMinor:1239 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-322 DeviceMajor:0 DeviceMinor:322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a8c0a6d2-f1f9-49e3-9475-4983b50667bf/volumes/kubernetes.io~projected/kube-api-access-mchbh DeviceMajor:0 DeviceMinor:660 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/64e9eca9-bbdd-4eca-9219-922bbab9b388/volumes/kubernetes.io~projected/kube-api-access-47sqj 
DeviceMajor:0 DeviceMinor:801 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a39c5481-961c-4ac2-8c5b-a2c0165f4188/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1162 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c0b78aa6-7bc8-4221-81f5-bf62a7110380/volumes/kubernetes.io~projected/kube-api-access-lhzk6 DeviceMajor:0 DeviceMinor:1163 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fc334fff-c0bf-4905-bcdb-b0d2a35b0590/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:434 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-918 DeviceMajor:0 DeviceMinor:918 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1214 DeviceMajor:0 DeviceMinor:1214 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:529 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-395 DeviceMajor:0 DeviceMinor:395 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9fd9f419-2cdc-4991-8fb9-87d76ac58976/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:77 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fa9d778b1d5703420b9779e5e17c8c6a6104fc97f8264778eb9ed382719853b9/userdata/shm DeviceMajor:0 DeviceMinor:652 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-897 DeviceMajor:0 DeviceMinor:897 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/ae43311e-14ba-40a1-bdbf-f02d68031757/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1062 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1184 DeviceMajor:0 DeviceMinor:1184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-945 DeviceMajor:0 DeviceMinor:945 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-767 DeviceMajor:0 DeviceMinor:767 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1186 DeviceMajor:0 DeviceMinor:1186 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8a278abf-8c59-4454-94d0-a0d0768cbec5/volumes/kubernetes.io~projected/kube-api-access-r9crd DeviceMajor:0 DeviceMinor:765 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/22094081262cfd9afca75424166ecb944e973d770312e29078a1dee4fb675d30/userdata/shm DeviceMajor:0 DeviceMinor:794 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-848 DeviceMajor:0 DeviceMinor:848 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:905 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a1af84e0-776b-4285-906a-6880dbc82a7b/volumes/kubernetes.io~projected/kube-api-access-6lp29 DeviceMajor:0 DeviceMinor:375 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8a278abf-8c59-4454-94d0-a0d0768cbec5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:757 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-920 
DeviceMajor:0 DeviceMinor:920 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/49defec6-a225-47ab-99ff-7a846f23eb00/volumes/kubernetes.io~projected/kube-api-access-k94cb DeviceMajor:0 DeviceMinor:1292 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/33675e96-ce49-49be-9117-954ac7cca5d5/volumes/kubernetes.io~projected/kube-api-access-hbw6n DeviceMajor:0 DeviceMinor:167 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/87cf4690-1ec1-44fc-94bd-730d9f2e6762/volumes/kubernetes.io~projected/kube-api-access-r9c94 DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-410 DeviceMajor:0 DeviceMinor:410 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volumes/kubernetes.io~projected/kube-api-access-gr6nr DeviceMajor:0 DeviceMinor:141 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/64e9eca9-bbdd-4eca-9219-922bbab9b388/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:798 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/934ad9d048e353486054177eacce7219c994c68dfad561ddfd4035fc938101d3/userdata/shm DeviceMajor:0 DeviceMinor:808 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:1029 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1079 DeviceMajor:0 DeviceMinor:1079 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8157f73d-c757-40c4-80bc-3c9de2f2288a/volumes/kubernetes.io~projected/kube-api-access-bk5m4 DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-387 DeviceMajor:0 DeviceMinor:387 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-431 DeviceMajor:0 DeviceMinor:431 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-452 DeviceMajor:0 DeviceMinor:452 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/80505c2710f2e2216eec6a4e82e9601038f01af58386ea11bb977eb9c2b78e51/userdata/shm DeviceMajor:0 DeviceMinor:768 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-947 DeviceMajor:0 DeviceMinor:947 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1078 DeviceMajor:0 DeviceMinor:1078 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/caef1c17-56b0-479c-b000-caaac3c2b249/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:745 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/989af121-da08-4f40-b08c-dd2aa67bc60c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4cf8dbc3fd31a273c2cbd586eecdb2a0961392b7bd552bb39381cfb88539e45/userdata/shm DeviceMajor:0 DeviceMinor:280 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/4ecbdf77-0c73-487e-943e-5315a0f8b8d4/volumes/kubernetes.io~projected/kube-api-access-ntlv2 DeviceMajor:0 DeviceMinor:901 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-458 DeviceMajor:0 DeviceMinor:458 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-496 DeviceMajor:0 DeviceMinor:496 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1274 DeviceMajor:0 DeviceMinor:1274 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-105 DeviceMajor:0 DeviceMinor:105 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31d71c90-cab7-4411-9426-0713cb026294/volumes/kubernetes.io~projected/kube-api-access-57cks DeviceMajor:0 DeviceMinor:268 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/07243cbc35256d0bbc44485dfcf1dcdc835463392fa9dc5f89599380e929e672/userdata/shm DeviceMajor:0 DeviceMinor:791 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-718 DeviceMajor:0 DeviceMinor:718 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-773 DeviceMajor:0 DeviceMinor:773 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-467 DeviceMajor:0 DeviceMinor:467 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0ea53368ce61e6c8836a7d0c6d716b7e2c7e18ee974ab80f253b08e24d34227b/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 
DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-454 DeviceMajor:0 DeviceMinor:454 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-382 DeviceMajor:0 DeviceMinor:382 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1237 DeviceMajor:0 DeviceMinor:1237 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/329b7497d730cc1438c1c88bd3563dab745cc5c71baf09835af567df43aee00e/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3c3c6a0066a2da65aa0c6f5621f865feea551c3602354f05a3bf53b7f588a01e/userdata/shm DeviceMajor:0 DeviceMinor:1063 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-719 DeviceMajor:0 DeviceMinor:719 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1113 DeviceMajor:0 DeviceMinor:1113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c0b78aa6-7bc8-4221-81f5-bf62a7110380/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1160 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-340 DeviceMajor:0 DeviceMinor:340 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/db9dc349-5216-43ff-8c17-3a9384a010ea/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-626 DeviceMajor:0 DeviceMinor:626 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/448aafd2-ffb3-42c5-8085-f6194d7862e5/volumes/kubernetes.io~projected/kube-api-access-nv57n DeviceMajor:0 
DeviceMinor:644 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5f55b652-bef8-4f50-9d1d-9d0a340c1dea/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1067 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-394 DeviceMajor:0 DeviceMinor:394 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/419f28a9-8fd7-4b59-9554-4d884a1208b5/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:508 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ef3a09a5-b019-48a3-97f8-7ddadb37394e/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1071 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-716 DeviceMajor:0 DeviceMinor:716 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5d2b154b-de63-4c9b-99d8-487fb3035fb9/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/af7b6f34-adca-4bdb-9e41-e2995a1d67a8/volumes/kubernetes.io~projected/kube-api-access-nrrq4 DeviceMajor:0 DeviceMinor:424 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/88c6fd1112c1b3efe31f79a2dc6cd9198555dc6b1c7c6547da60005b56efbb9b/userdata/shm DeviceMajor:0 DeviceMinor:696 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-873 DeviceMajor:0 DeviceMinor:873 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-1152 DeviceMajor:0 DeviceMinor:1152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1332 DeviceMajor:0 DeviceMinor:1332 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e1b8782a8564dd4906c6406ffd3ad6cd072d92723a07ad86ed42c394d07ab355/userdata/shm DeviceMajor:0 DeviceMinor:406 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-557 DeviceMajor:0 DeviceMinor:557 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-466 DeviceMajor:0 DeviceMinor:466 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~secret/secret-telemeter-client DeviceMajor:0 DeviceMinor:524 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/16d6dd52-d73b-4696-873e-00a6d4bb2c77/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:778 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/86f6836b-b018-4c7a-87ad-51809a4b9c7a/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:783 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/16d6dd52-d73b-4696-873e-00a6d4bb2c77/volumes/kubernetes.io~projected/kube-api-access-sxncg DeviceMajor:0 DeviceMinor:787 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-746 DeviceMajor:0 DeviceMinor:746 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/56784add7fab2d6fa30c1dec4a904d183b8bd0ff401f8eca8e9ad2aff7741c30/userdata/shm DeviceMajor:0 DeviceMinor:809 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-822 DeviceMajor:0 DeviceMinor:822 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1295 DeviceMajor:0 DeviceMinor:1295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~projected/kube-api-access-pzmqr DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/db9dc349-5216-43ff-8c17-3a9384a010ea/volumes/kubernetes.io~projected/kube-api-access-smglm DeviceMajor:0 DeviceMinor:269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1fe69517-eec2-4721-933c-fa27cea7ab1f/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:564 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:140 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/43e9807a-859c-44c1-8511-0066b0f59ff8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/27ab8945-6a5b-4f7d-b893-6358da214499/volumes/kubernetes.io~projected/kube-api-access-jshgm DeviceMajor:0 DeviceMinor:762 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bdf18981-b755-4b11-8793-38bc5e2e755b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:685 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-99 DeviceMajor:0 DeviceMinor:99 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-397 DeviceMajor:0 DeviceMinor:397 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-692 DeviceMajor:0 DeviceMinor:692 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/437abb0aba17c9c29dae7086b861fc64a62c90a30c1567fbdec9a15f52cef039/userdata/shm DeviceMajor:0 DeviceMinor:802 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1122 DeviceMajor:0 DeviceMinor:1122 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1366 DeviceMajor:0 DeviceMinor:1366 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-331 DeviceMajor:0 DeviceMinor:331 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/84a61910-48eb-4c27-8d69-f6aa7ce912ca/volumes/kubernetes.io~projected/kube-api-access-l5fng DeviceMajor:0 DeviceMinor:437 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/19ce4b45-db46-4fc3-8d72-963de22f026b/volumes/kubernetes.io~projected/kube-api-access-45226 DeviceMajor:0 DeviceMinor:625 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-856 DeviceMajor:0 DeviceMinor:856 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1026 DeviceMajor:0 DeviceMinor:1026 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1176 DeviceMajor:0 DeviceMinor:1176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-202 DeviceMajor:0 DeviceMinor:202 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/1913b004153de96aee747d5e43e4468694e4be30746f1b0a2aa4f60e2176707c/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-528 DeviceMajor:0 DeviceMinor:528 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-384 DeviceMajor:0 DeviceMinor:384 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/236aeb004972a9d3e9949ce545b3cfedb3b4ea60df38f4b61a82d0b2465524af/userdata/shm DeviceMajor:0 DeviceMinor:893 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-330 DeviceMajor:0 DeviceMinor:330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3595d9d8fc957b18c48383f1ad0fcfa521ef5e3e33c6ab788b51ff8638981630/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/84a61910-48eb-4c27-8d69-f6aa7ce912ca/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:436 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8df5627ff680da0c81aa3a3c2df511cdff6fa3f30ba3845441250cbb689ca7f4/userdata/shm DeviceMajor:0 DeviceMinor:909 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1074 DeviceMajor:0 DeviceMinor:1074 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~secret/telemeter-client-tls DeviceMajor:0 DeviceMinor:510 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d3ca2d2f-9f31-4524-a28f-cf16b02dd711/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c0a3548f-299c-4234-9bf1-c93efcb9740b/volumes/kubernetes.io~projected/kube-api-access-7d5fq DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:495 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e4a8f393be39a3a9efde4bf2412add15fe01a8acdf8e5580190095494f3e6b47/userdata/shm DeviceMajor:0 DeviceMinor:663 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-826 DeviceMajor:0 DeviceMinor:826 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d/volumes/kubernetes.io~projected/kube-api-access-xtgrt DeviceMajor:0 DeviceMinor:880 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a39c5481-961c-4ac2-8c5b-a2c0165f4188/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1161 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e3cc4073-a926-4aba-81e6-c616c2bb2987/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1055 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95650a37daeacacf8e69d045d48ba4a17652648a0c83345072715e4ffcfa2dda/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-515 DeviceMajor:0 DeviceMinor:515 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/27ab8945-6a5b-4f7d-b893-6358da214499/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 
DeviceMinor:758 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1002 DeviceMajor:0 DeviceMinor:1002 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-612 DeviceMajor:0 DeviceMinor:612 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-727 DeviceMajor:0 DeviceMinor:727 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1106 DeviceMajor:0 DeviceMinor:1106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1319 DeviceMajor:0 DeviceMinor:1319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b385880b-a26b-4353-8f6f-b7f926bcc67c/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:780 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-960 DeviceMajor:0 DeviceMinor:960 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5ea8ac7578359ce087855682fd87fbd08a72604f8701716ddbb28b051d93bff2/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-217 DeviceMajor:0 DeviceMinor:217 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/26473c28-db42-47e6-9164-8c441ccc48ca/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:112 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-392 DeviceMajor:0 DeviceMinor:392 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1199 
DeviceMajor:0 DeviceMinor:1199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d28490b0-96ca-4fe0-8fae-e6f8390f933b/volumes/kubernetes.io~projected/kube-api-access-qm5p2 DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-540 DeviceMajor:0 DeviceMinor:540 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-567 DeviceMajor:0 DeviceMinor:567 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-731 DeviceMajor:0 DeviceMinor:731 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-736 DeviceMajor:0 DeviceMinor:736 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/469af398b29095aa460373b4a9d58261db50995525853368aaa76c2198d9753f/userdata/shm DeviceMajor:0 DeviceMinor:117 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a4339bd5-b8d1-467e-8158-4464ea901148/volumes/kubernetes.io~projected/kube-api-access-jvthk DeviceMajor:0 DeviceMinor:789 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1097 DeviceMajor:0 DeviceMinor:1097 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ae43311e-14ba-40a1-bdbf-f02d68031757/volumes/kubernetes.io~projected/kube-api-access-mf5p9 DeviceMajor:0 DeviceMinor:1061 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:504 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1104 DeviceMajor:0 DeviceMinor:1104 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/419f28a9-8fd7-4b59-9554-4d884a1208b5/volumes/kubernetes.io~projected/kube-api-access-fttgr DeviceMajor:0 DeviceMinor:265 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dd0998467d8099b6ff8531304dd3f0e97b5c79ad6520753dadef997846c4d469/userdata/shm DeviceMajor:0 DeviceMinor:911 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-850 DeviceMajor:0 DeviceMinor:850 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-853 DeviceMajor:0 DeviceMinor:853 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e8c5772-b6e2-43d8-b173-af74541855fb/volumes/kubernetes.io~projected/kube-api-access-z67rw DeviceMajor:0 DeviceMinor:536 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8b73ae08-0ad7-4f99-8002-6df0d984cd2c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/86f6836b-b018-4c7a-87ad-51809a4b9c7a/volumes/kubernetes.io~projected/kube-api-access-wcffg DeviceMajor:0 DeviceMinor:785 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/21384bd0-495c-406a-9462-e9e740c04686/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/1fe69517-eec2-4721-933c-fa27cea7ab1f/volumes/kubernetes.io~projected/kube-api-access-rnwtd DeviceMajor:0 DeviceMinor:253 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/972260fa4d71d5a14fa2c2c948e5708100e799e6a9e6ff6a656d3e5a79c34eaa/userdata/shm DeviceMajor:0 DeviceMinor:908 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5f55b652-bef8-4f50-9d1d-9d0a340c1dea/volumes/kubernetes.io~projected/kube-api-access-rj796 DeviceMajor:0 DeviceMinor:1069 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-682 DeviceMajor:0 DeviceMinor:682 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-364 DeviceMajor:0 DeviceMinor:364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-390 DeviceMajor:0 DeviceMinor:390 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/92c9b6ef7965615602e16b5814c26d9915a23507222fc502b624945d6f4ccc53/userdata/shm DeviceMajor:0 DeviceMinor:800 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-820 DeviceMajor:0 DeviceMinor:820 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-98 DeviceMajor:0 DeviceMinor:98 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-748 DeviceMajor:0 DeviceMinor:748 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-548 DeviceMajor:0 DeviceMinor:548 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-842 DeviceMajor:0 DeviceMinor:842 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/864b7e188cfb62e2b7e87dc90ff4536aab0f9cd5aed1bd5481272fd1babe2e98/userdata/shm DeviceMajor:0 DeviceMinor:1031 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-538 DeviceMajor:0 DeviceMinor:538 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1084 
DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1347 DeviceMajor:0 DeviceMinor:1347 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3/volumes/kubernetes.io~projected/kube-api-access-2xd6r DeviceMajor:0 DeviceMinor:892 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-933 DeviceMajor:0 DeviceMinor:933 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-930 DeviceMajor:0 DeviceMinor:930 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8b73ae08-0ad7-4f99-8002-6df0d984cd2c/volumes/kubernetes.io~projected/kube-api-access-mb46b DeviceMajor:0 DeviceMinor:258 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-326 DeviceMajor:0 DeviceMinor:326 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-578 DeviceMajor:0 DeviceMinor:578 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/19ce4b45-db46-4fc3-8d72-963de22f026b/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:624 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:779 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665/volumes/kubernetes.io~projected/kube-api-access-lc9pl DeviceMajor:0 DeviceMinor:993 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1260 DeviceMajor:0 DeviceMinor:1260 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-182 DeviceMajor:0 DeviceMinor:182 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c81ad608-a8ad-4289-a8d2-d48acb9b540c/volumes/kubernetes.io~projected/kube-api-access-wj4dx DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-902 DeviceMajor:0 DeviceMinor:902 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/535151362e36c1745033704c37dfb910d9260b348b0c35a197ec5a2c74a4ea53/userdata/shm DeviceMajor:0 DeviceMinor:1066 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1297 DeviceMajor:0 DeviceMinor:1297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1305 DeviceMajor:0 DeviceMinor:1305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-83 DeviceMajor:0 DeviceMinor:83 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ea4c132-b6d0-4dc9-942d-48e359eed418/volumes/kubernetes.io~projected/kube-api-access-7nlf9 DeviceMajor:0 DeviceMinor:135 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aa71c4fa879120a78bd3b6a5ee4f553adcd2305018af6f53632371d2a776a283/userdata/shm DeviceMajor:0 DeviceMinor:552 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-684 DeviceMajor:0 DeviceMinor:684 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-870 DeviceMajor:0 DeviceMinor:870 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-872 DeviceMajor:0 DeviceMinor:872 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bdf18981-b755-4b11-8793-38bc5e2e755b/volumes/kubernetes.io~projected/kube-api-access-wr5wk DeviceMajor:0 DeviceMinor:689 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb/volumes/kubernetes.io~secret/encryption-config 
DeviceMajor:0 DeviceMinor:546 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:645 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a8c0a6d2-f1f9-49e3-9475-4983b50667bf/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:662 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ae43311e-14ba-40a1-bdbf-f02d68031757/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1059 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1195 DeviceMajor:0 DeviceMinor:1195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-971 DeviceMajor:0 DeviceMinor:971 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/caef1c17-56b0-479c-b000-caaac3c2b249/volumes/kubernetes.io~projected/kube-api-access-8kgzf DeviceMajor:0 DeviceMinor:764 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-123 DeviceMajor:0 DeviceMinor:123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-347 DeviceMajor:0 DeviceMinor:347 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a9fb4904f90243607c1bd114c0e1c541fb17de9f6f5ce80d7f75369901ce613b/userdata/shm DeviceMajor:0 DeviceMinor:813 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-852 DeviceMajor:0 DeviceMinor:852 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/db318f21d539d497ae2372897b56aaa3b6fedeaae97e556d74c5b3c251315d6e/userdata/shm DeviceMajor:0 DeviceMinor:912 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-928 DeviceMajor:0 DeviceMinor:928 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ec64844e3e46d42ec4c570bb811039de046f41f872bc256c338ea6312e07ba0d/userdata/shm DeviceMajor:0 DeviceMinor:1293 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-824 DeviceMajor:0 DeviceMinor:824 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1099 DeviceMajor:0 DeviceMinor:1099 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1115 DeviceMajor:0 DeviceMinor:1115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1024 DeviceMajor:0 DeviceMinor:1024 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1108 DeviceMajor:0 DeviceMinor:1108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/afc706c41127ee1f98bf413cd8a012a0e0a8f183eef4bf77721d14a272ded89e/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0/volumes/kubernetes.io~projected/kube-api-access-tthkk DeviceMajor:0 DeviceMinor:640 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-836 DeviceMajor:0 DeviceMinor:836 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0c1b7791952a54d8b3ef36cceac195dbbcc9face3120a05a59672ee12b84ba46/userdata/shm DeviceMajor:0 DeviceMinor:895 Capacity:67108864 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:overlay_0-1049 DeviceMajor:0 DeviceMinor:1049 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-542 DeviceMajor:0 DeviceMinor:542 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-483 DeviceMajor:0 DeviceMinor:483 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/900e244c-67aa-402f-b5f0-d37c5c1cedf7/volumes/kubernetes.io~projected/kube-api-access-n85mh DeviceMajor:0 DeviceMinor:257 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a/volumes/kubernetes.io~projected/kube-api-access-tl7wm DeviceMajor:0 DeviceMinor:786 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367/volumes/kubernetes.io~projected/kube-api-access-2vz22 DeviceMajor:0 DeviceMinor:1030 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8ef8165957098f6be8792289e9cb306a276c73110e287a7b80ba51a3888e812c/userdata/shm DeviceMajor:0 DeviceMinor:1091 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1149 DeviceMajor:0 DeviceMinor:1149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/92d6a373c92ade68969e49443823f212abf3c0859e9aaf5d10ff5913a474e6f8/userdata/shm DeviceMajor:0 DeviceMinor:278 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3e54884bb129553f96e22ded74db5788d449f044a28bbdd487ce407f3c14ba01/userdata/shm DeviceMajor:0 DeviceMinor:512 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-614 DeviceMajor:0 DeviceMinor:614 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-925 DeviceMajor:0 DeviceMinor:925 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5c30b9cdcf13e6a3816e39ff92455fc96f090fac8eb9899e480122d604e7a1b8/userdata/shm DeviceMajor:0 DeviceMinor:517 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-568 DeviceMajor:0 DeviceMinor:568 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3f68274f91c27d15a060c5bac225b0b94e8aa70b90454461d048fa9e384a03df/userdata/shm DeviceMajor:0 DeviceMinor:283 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ee3a6748-0bbc-41bf-8726-a8db18faf03b/volumes/kubernetes.io~projected/kube-api-access-mk2pl DeviceMajor:0 DeviceMinor:714 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5412cad37cfea94450b3688c380c9cc1161ff7a9a7f0b141297d24e746b33629/userdata/shm DeviceMajor:0 DeviceMinor:770 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-838 DeviceMajor:0 DeviceMinor:838 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1001 DeviceMajor:0 DeviceMinor:1001 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/32a79fe0-e619-4a66-8617-e8111bdc7e96/volumes/kubernetes.io~projected/kube-api-access-jkq7j DeviceMajor:0 DeviceMinor:110 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-456 DeviceMajor:0 DeviceMinor:456 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-976 DeviceMajor:0 DeviceMinor:976 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-366 DeviceMajor:0 DeviceMinor:366 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-860 DeviceMajor:0 DeviceMinor:860 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/92008ac4-8deb-4fb9-9116-14d2d005bd36/volumes/kubernetes.io~projected/kube-api-access-n4dn4 DeviceMajor:0 DeviceMinor:1060 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a8c0a6d2-f1f9-49e3-9475-4983b50667bf/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:659 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1086 DeviceMajor:0 DeviceMinor:1086 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f83848e1580bc2bc923ed29b258b640fe63d1b2a36889eeff462ef2f63db0d04/userdata/shm DeviceMajor:0 DeviceMinor:777 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1169 DeviceMajor:0 DeviceMinor:1169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-741 DeviceMajor:0 DeviceMinor:741 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:013da989dc1e60f MacAddress:5e:91:4c:8a:3d:0a Speed:10000 Mtu:8900} {Name:04c921d85b432c0 MacAddress:4e:b0:95:ef:11:3e Speed:10000 Mtu:8900} {Name:07243cbc35256d0 MacAddress:72:e9:f1:dc:a8:37 Speed:10000 Mtu:8900} {Name:07f2250f0416c7a MacAddress:92:a4:0f:f0:99:e2 Speed:10000 Mtu:8900} {Name:0c1b7791952a54d MacAddress:1a:73:ea:46:77:c6 Speed:10000 Mtu:8900} {Name:0c48d8481d8bb65 MacAddress:0a:d5:6f:c4:3f:44 Speed:10000 Mtu:8900} {Name:118104a32f855cf MacAddress:c6:e6:6f:53:12:6f 
Speed:10000 Mtu:8900} {Name:1913b004153de96 MacAddress:fa:52:7d:53:19:f5 Speed:10000 Mtu:8900} {Name:22094081262cfd9 MacAddress:3e:86:ba:9d:5c:05 Speed:10000 Mtu:8900} {Name:236aeb004972a9d MacAddress:42:f2:4d:ba:ca:2c Speed:10000 Mtu:8900} {Name:3209ad8e141d4f4 MacAddress:9a:ec:6e:7e:b7:91 Speed:10000 Mtu:8900} {Name:34bf21f0d5e7428 MacAddress:1e:0d:7b:ca:7e:47 Speed:10000 Mtu:8900} {Name:3c3c6a0066a2da6 MacAddress:a6:25:ef:9d:8c:03 Speed:10000 Mtu:8900} {Name:3e54884bb129553 MacAddress:f2:99:1c:36:be:d0 Speed:10000 Mtu:8900} {Name:3f68274f91c27d1 MacAddress:f6:85:07:7c:af:c7 Speed:10000 Mtu:8900} {Name:437abb0aba17c9c MacAddress:0e:6a:9d:33:c0:10 Speed:10000 Mtu:8900} {Name:4e9788fdd4565e3 MacAddress:ca:a2:2b:95:fb:63 Speed:10000 Mtu:8900} {Name:51c5a5d32ca643e MacAddress:02:9c:55:cb:e9:48 Speed:10000 Mtu:8900} {Name:535151362e36c17 MacAddress:72:9a:58:2b:1a:cb Speed:10000 Mtu:8900} {Name:5412cad37cfea94 MacAddress:0e:bc:9d:cc:4a:47 Speed:10000 Mtu:8900} {Name:56784add7fab2d6 MacAddress:ea:5e:ea:a7:ff:50 Speed:10000 Mtu:8900} {Name:56dab50a6ee92d8 MacAddress:02:1d:bc:f6:ab:18 Speed:10000 Mtu:8900} {Name:5c30b9cdcf13e6a MacAddress:0a:19:7e:ad:50:e0 Speed:10000 Mtu:8900} {Name:5fc1828a85716c5 MacAddress:f6:ad:c5:60:a2:c8 Speed:10000 Mtu:8900} {Name:6315ef904771a7f MacAddress:4e:fc:5a:bc:2f:59 Speed:10000 Mtu:8900} {Name:6ea59bb762ddd91 MacAddress:c2:71:5a:55:65:ea Speed:10000 Mtu:8900} {Name:7190b6f768a0fe9 MacAddress:d6:5f:5d:88:d5:65 Speed:10000 Mtu:8900} {Name:7cd291b9260d847 MacAddress:e6:09:32:28:a4:0f Speed:10000 Mtu:8900} {Name:7d7dfb1a01a9470 MacAddress:32:e9:38:59:c6:4a Speed:10000 Mtu:8900} {Name:80b53aa57494cc0 MacAddress:0a:62:d0:a6:23:88 Speed:10000 Mtu:8900} {Name:81b14b205a5b43d MacAddress:62:d5:08:90:74:c8 Speed:10000 Mtu:8900} {Name:8468bd2a2161175 MacAddress:22:92:55:58:0f:6b Speed:10000 Mtu:8900} {Name:84da6dcc282a18c MacAddress:46:52:1a:6d:86:01 Speed:10000 Mtu:8900} {Name:864b7e188cfb62e MacAddress:96:e3:bc:f8:fc:07 Speed:10000 Mtu:8900} 
{Name:88c6fd1112c1b3e MacAddress:7a:4d:48:20:4b:81 Speed:10000 Mtu:8900} {Name:8df5627ff680da0 MacAddress:7e:fe:a7:4b:71:c9 Speed:10000 Mtu:8900} {Name:92c9b6ef7965615 MacAddress:ee:b3:a6:43:2c:a4 Speed:10000 Mtu:8900} {Name:92d6a373c92ade6 MacAddress:12:49:03:4d:f8:d5 Speed:10000 Mtu:8900} {Name:934ad9d048e3534 MacAddress:02:7e:13:10:ed:c0 Speed:10000 Mtu:8900} {Name:95650a37daeacac MacAddress:de:b2:a0:e7:da:18 Speed:10000 Mtu:8900} {Name:972260fa4d71d5a MacAddress:ee:32:86:85:64:23 Speed:10000 Mtu:8900} {Name:9bd614ac7dafc38 MacAddress:42:07:17:76:7c:36 Speed:10000 Mtu:8900} {Name:9df920ca539f41d MacAddress:de:0c:a4:5b:fe:69 Speed:10000 Mtu:8900} {Name:a3b80d783578c7d MacAddress:ea:d8:84:50:01:f1 Speed:10000 Mtu:8900} {Name:a9fb4904f902436 MacAddress:66:55:ab:b0:16:88 Speed:10000 Mtu:8900} {Name:aa71c4fa879120a MacAddress:82:db:85:47:f3:ef Speed:10000 Mtu:8900} {Name:afc706c41127ee1 MacAddress:36:dc:a5:b3:e6:db Speed:10000 Mtu:8900} {Name:b4cf8dbc3fd31a2 MacAddress:96:7e:ea:03:b5:07 Speed:10000 Mtu:8900} {Name:ba0f9ce144b093c MacAddress:32:04:bb:9a:e7:fc Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:62:9a:a7:52:77:d8 Speed:0 Mtu:8900} {Name:cc5528fa6db2bfe MacAddress:da:96:99:a5:7c:16 Speed:10000 Mtu:8900} {Name:db318f21d539d49 MacAddress:de:17:16:3e:f1:27 Speed:10000 Mtu:8900} {Name:e1b8782a8564dd4 MacAddress:4a:13:93:8d:1c:c4 Speed:10000 Mtu:8900} {Name:e4a8f393be39a3a MacAddress:7e:86:cd:b8:7b:5b Speed:10000 Mtu:8900} {Name:e94527abc555de6 MacAddress:36:97:86:64:85:c0 Speed:10000 Mtu:8900} {Name:ec64844e3e46d42 MacAddress:46:6a:3d:20:c8:9b Speed:10000 Mtu:8900} {Name:ed48d3d3cb753c9 MacAddress:d6:4d:15:d9:b9:0a Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:2c:8d:77 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:3f:57:48 Speed:-1 Mtu:9000} {Name:fa9d778b1d57034 MacAddress:42:4d:ae:70:b5:f9 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 
MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:1a:32:2a:84:99:77 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] 
Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 20 15:01:01.949804 master-0 kubenswrapper[28120]: I0220 15:01:01.948946 28120 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 20 15:01:01.949804 master-0 kubenswrapper[28120]: I0220 15:01:01.949053 28120 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 20 15:01:01.949804 master-0 kubenswrapper[28120]: I0220 15:01:01.949651 28120 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 20 15:01:01.950594 master-0 kubenswrapper[28120]: I0220 15:01:01.949957 28120 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 20 15:01:01.950594 master-0 kubenswrapper[28120]: I0220 15:01:01.950009 28120 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 20 15:01:01.950594 master-0 kubenswrapper[28120]: I0220 15:01:01.950355 28120 topology_manager.go:138] "Creating topology manager with none policy" Feb 20 15:01:01.950594 master-0 kubenswrapper[28120]: I0220 15:01:01.950377 28120 container_manager_linux.go:303] "Creating device plugin manager" Feb 20 15:01:01.950594 master-0 kubenswrapper[28120]: I0220 15:01:01.950393 28120 
manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 15:01:01.950594 master-0 kubenswrapper[28120]: I0220 15:01:01.950434 28120 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 20 15:01:01.950594 master-0 kubenswrapper[28120]: I0220 15:01:01.950495 28120 state_mem.go:36] "Initialized new in-memory state store" Feb 20 15:01:01.951031 master-0 kubenswrapper[28120]: I0220 15:01:01.950777 28120 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 20 15:01:01.951031 master-0 kubenswrapper[28120]: I0220 15:01:01.950895 28120 kubelet.go:418] "Attempting to sync node with API server" Feb 20 15:01:01.951031 master-0 kubenswrapper[28120]: I0220 15:01:01.950915 28120 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 20 15:01:01.951031 master-0 kubenswrapper[28120]: I0220 15:01:01.950963 28120 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 20 15:01:01.951031 master-0 kubenswrapper[28120]: I0220 15:01:01.950985 28120 kubelet.go:324] "Adding apiserver pod source" Feb 20 15:01:01.951031 master-0 kubenswrapper[28120]: I0220 15:01:01.951014 28120 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 20 15:01:01.952745 master-0 kubenswrapper[28120]: I0220 15:01:01.952504 28120 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 20 15:01:01.952863 master-0 kubenswrapper[28120]: I0220 15:01:01.952787 28120 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 20 15:01:01.953485 master-0 kubenswrapper[28120]: I0220 15:01:01.953384 28120 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 20 15:01:01.953637 master-0 kubenswrapper[28120]: I0220 15:01:01.953607 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 20 15:01:01.953768 master-0 kubenswrapper[28120]: I0220 15:01:01.953732 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 20 15:01:01.953862 master-0 kubenswrapper[28120]: I0220 15:01:01.953763 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 20 15:01:01.953862 master-0 kubenswrapper[28120]: I0220 15:01:01.953797 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 20 15:01:01.953862 master-0 kubenswrapper[28120]: I0220 15:01:01.953816 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 20 15:01:01.953862 master-0 kubenswrapper[28120]: I0220 15:01:01.953845 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 20 15:01:01.953862 master-0 kubenswrapper[28120]: I0220 15:01:01.953861 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 20 15:01:01.954169 master-0 kubenswrapper[28120]: I0220 15:01:01.953877 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 20 15:01:01.954169 master-0 kubenswrapper[28120]: I0220 15:01:01.953895 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 20 15:01:01.954169 master-0 kubenswrapper[28120]: I0220 15:01:01.953911 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 20 15:01:01.954169 master-0 kubenswrapper[28120]: I0220 15:01:01.953966 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 20 15:01:01.954169 master-0 kubenswrapper[28120]: I0220 15:01:01.954002 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 20 15:01:01.954169 master-0 kubenswrapper[28120]: I0220 15:01:01.954061 28120 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 20 15:01:01.955571 master-0 kubenswrapper[28120]: I0220 15:01:01.954814 28120 server.go:1280] "Started kubelet"
Feb 20 15:01:01.957057 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 20 15:01:01.964553 master-0 kubenswrapper[28120]: I0220 15:01:01.960382 28120 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 20 15:01:01.964553 master-0 kubenswrapper[28120]: I0220 15:01:01.960384 28120 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 20 15:01:01.964553 master-0 kubenswrapper[28120]: I0220 15:01:01.960440 28120 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 20 15:01:01.964553 master-0 kubenswrapper[28120]: I0220 15:01:01.962728 28120 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 20 15:01:01.964832 master-0 kubenswrapper[28120]: I0220 15:01:01.964752 28120 server.go:449] "Adding debug handlers to kubelet server"
Feb 20 15:01:01.965683 master-0 kubenswrapper[28120]: I0220 15:01:01.965550 28120 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 20 15:01:01.970700 master-0 kubenswrapper[28120]: I0220 15:01:01.970609 28120 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 20 15:01:01.982010 master-0 kubenswrapper[28120]: I0220 15:01:01.981499 28120 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 20 15:01:01.982010 master-0 kubenswrapper[28120]: I0220 15:01:01.981563 28120 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 20 15:01:01.982010 master-0 kubenswrapper[28120]: I0220 15:01:01.981679 28120 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-21 14:36:18 +0000 UTC, rotation deadline is 2026-02-21 09:00:53.313501582 +0000 UTC
Feb 20 15:01:01.982010 master-0 kubenswrapper[28120]: I0220 15:01:01.981731 28120 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h59m51.331774684s for next certificate rotation
Feb 20 15:01:01.982010 master-0 kubenswrapper[28120]: I0220 15:01:01.981977 28120 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 20 15:01:01.982010 master-0 kubenswrapper[28120]: I0220 15:01:01.981999 28120 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 20 15:01:01.982648 master-0 kubenswrapper[28120]: I0220 15:01:01.982143 28120 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 20 15:01:01.986805 master-0 kubenswrapper[28120]: I0220 15:01:01.986716 28120 factory.go:55] Registering systemd factory
Feb 20 15:01:01.986805 master-0 kubenswrapper[28120]: I0220 15:01:01.986772 28120 factory.go:221] Registration of the systemd container factory successfully
Feb 20 15:01:01.989986 master-0 kubenswrapper[28120]: I0220 15:01:01.989901 28120 factory.go:153] Registering CRI-O factory
Feb 20 15:01:01.989986 master-0 kubenswrapper[28120]: I0220 15:01:01.989969 28120 factory.go:221] Registration of the crio container factory successfully
Feb 20 15:01:01.990303 master-0 kubenswrapper[28120]: I0220 15:01:01.990079 28120 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 20 15:01:01.990303 master-0 kubenswrapper[28120]: I0220 15:01:01.990109 28120 factory.go:103] Registering Raw factory
Feb 20 15:01:01.990303 master-0 kubenswrapper[28120]: I0220 15:01:01.990134 28120 manager.go:1196] Started watching for new ooms in manager
Feb 20 15:01:01.992610 master-0 kubenswrapper[28120]: I0220 15:01:01.992550 28120 manager.go:319] Starting recovery of all containers
Feb 20 15:01:02.017091 master-0 kubenswrapper[28120]: I0220 15:01:02.016275 28120 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 20 15:01:02.019527 master-0 kubenswrapper[28120]: E0220 15:01:02.019456 28120 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 20 15:01:02.034504 master-0 kubenswrapper[28120]: I0220 15:01:02.034376 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86f6836b-b018-4c7a-87ad-51809a4b9c7a" volumeName="kubernetes.io/projected/86f6836b-b018-4c7a-87ad-51809a4b9c7a-kube-api-access-wcffg" seLinuxMountContext=""
Feb 20 15:01:02.034504 master-0 kubenswrapper[28120]: I0220 15:01:02.034496 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99fe3b99-0b40-4887-bcc8-59caa515b99f" volumeName="kubernetes.io/projected/99fe3b99-0b40-4887-bcc8-59caa515b99f-kube-api-access-dkc7z" seLinuxMountContext=""
Feb 20 15:01:02.034504 master-0 kubenswrapper[28120]: I0220 15:01:02.034511 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" volumeName="kubernetes.io/empty-dir/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-audit-log" seLinuxMountContext=""
Feb 20 15:01:02.034504 master-0 kubenswrapper[28120]: I0220 15:01:02.034523 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5429ce9-f3b7-4024-ac77-3a93a2ac77bb" volumeName="kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-etcd-client" seLinuxMountContext=""
Feb 20 15:01:02.034504 master-0 kubenswrapper[28120]: I0220 15:01:02.034536 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="caef1c17-56b0-479c-b000-caaac3c2b249" volumeName="kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-images" seLinuxMountContext=""
Feb 20 15:01:02.034504 master-0 kubenswrapper[28120]: I0220 15:01:02.034548 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee3a6748-0bbc-41bf-8726-a8db18faf03b" volumeName="kubernetes.io/projected/ee3a6748-0bbc-41bf-8726-a8db18faf03b-kube-api-access-mk2pl" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034562 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a44fd-c153-47a6-a11d-7d4b7165c236" volumeName="kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-client" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034576 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="26473c28-db42-47e6-9164-8c441ccc48ca" volumeName="kubernetes.io/projected/26473c28-db42-47e6-9164-8c441ccc48ca-kube-api-access" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034593 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8c0a6d2-f1f9-49e3-9475-4983b50667bf" volumeName="kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-trusted-ca-bundle" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034608 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8c0a6d2-f1f9-49e3-9475-4983b50667bf" volumeName="kubernetes.io/projected/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-kube-api-access-mchbh" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034623 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19ce4b45-db46-4fc3-8d72-963de22f026b" volumeName="kubernetes.io/empty-dir/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-tuned" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034637 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a278abf-8c59-4454-94d0-a0d0768cbec5" volumeName="kubernetes.io/projected/8a278abf-8c59-4454-94d0-a0d0768cbec5-kube-api-access-r9crd" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034652 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19ce4b45-db46-4fc3-8d72-963de22f026b" volumeName="kubernetes.io/empty-dir/19ce4b45-db46-4fc3-8d72-963de22f026b-tmp" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034669 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de" volumeName="kubernetes.io/secret/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-proxy-tls" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034683 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27ab8945-6a5b-4f7d-b893-6358da214499" volumeName="kubernetes.io/secret/27ab8945-6a5b-4f7d-b893-6358da214499-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034697 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49044786-483a-406e-8750-f6ded400841d" volumeName="kubernetes.io/secret/49044786-483a-406e-8750-f6ded400841d-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034711 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4339bd5-b8d1-467e-8158-4464ea901148" volumeName="kubernetes.io/projected/a4339bd5-b8d1-467e-8158-4464ea901148-kube-api-access-jvthk" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034724 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8c0a6d2-f1f9-49e3-9475-4983b50667bf" volumeName="kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-client" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034738 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16d6dd52-d73b-4696-873e-00a6d4bb2c77" volumeName="kubernetes.io/configmap/16d6dd52-d73b-4696-873e-00a6d4bb2c77-images" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034750 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="26473c28-db42-47e6-9164-8c441ccc48ca" volumeName="kubernetes.io/secret/26473c28-db42-47e6-9164-8c441ccc48ca-serving-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034763 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" volumeName="kubernetes.io/projected/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-kube-api-access-rj796" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034778 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c81ad608-a8ad-4289-a8d2-d48acb9b540c" volumeName="kubernetes.io/configmap/c81ad608-a8ad-4289-a8d2-d48acb9b540c-config" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034790 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de" volumeName="kubernetes.io/configmap/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034803 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c31b8a7-edcb-403d-9122-7eb740f7d659" volumeName="kubernetes.io/configmap/4c31b8a7-edcb-403d-9122-7eb740f7d659-config" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034817 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c31b8a7-edcb-403d-9122-7eb740f7d659" volumeName="kubernetes.io/projected/4c31b8a7-edcb-403d-9122-7eb740f7d659-kube-api-access" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034861 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c31b8a7-edcb-403d-9122-7eb740f7d659" volumeName="kubernetes.io/secret/4c31b8a7-edcb-403d-9122-7eb740f7d659-serving-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034877 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae43311e-14ba-40a1-bdbf-f02d68031757" volumeName="kubernetes.io/configmap/ae43311e-14ba-40a1-bdbf-f02d68031757-metrics-client-ca" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034895 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ea4c132-b6d0-4dc9-942d-48e359eed418" volumeName="kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034910 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" volumeName="kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-server-tls" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034939 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3bf5be04-e4dd-44d9-be1a-3abe6ddd2367" volumeName="kubernetes.io/secret/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-proxy-tls" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034955 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45d7ef0c-272b-4d1e-965f-484975d5d25c" volumeName="kubernetes.io/configmap/45d7ef0c-272b-4d1e-965f-484975d5d25c-config" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.034967 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43e9807a-859c-44c1-8511-0066b0f59ff8" volumeName="kubernetes.io/secret/43e9807a-859c-44c1-8511-0066b0f59ff8-serving-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035046 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0" volumeName="kubernetes.io/configmap/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-config-volume" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035068 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d71c90-cab7-4411-9426-0713cb026294" volumeName="kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035083 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3bf5be04-e4dd-44d9-be1a-3abe6ddd2367" volumeName="kubernetes.io/projected/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-kube-api-access-2vz22" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035098 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e5d953b-dbc7-48df-9d6b-d61030ffd6e3" volumeName="kubernetes.io/empty-dir/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-catalog-content" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035112 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="996d4949-f92c-42ac-9bda-8c6ec0295e92" volumeName="kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-config" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035125 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0b78aa6-7bc8-4221-81f5-bf62a7110380" volumeName="kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-tls" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035137 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef3a09a5-b019-48a3-97f8-7ddadb37394e" volumeName="kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-certs" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035150 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc334fff-c0bf-4905-bcdb-b0d2a35b0590" volumeName="kubernetes.io/empty-dir/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-cache" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035164 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93786626-fac4-48f0-bf72-992bc39f4a82" volumeName="kubernetes.io/projected/93786626-fac4-48f0-bf72-992bc39f4a82-kube-api-access-fm2jn" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035176 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5429ce9-f3b7-4024-ac77-3a93a2ac77bb" volumeName="kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-etcd-serving-ca" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035192 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="84a61910-48eb-4c27-8d69-f6aa7ce912ca" volumeName="kubernetes.io/projected/84a61910-48eb-4c27-8d69-f6aa7ce912ca-ca-certs" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035205 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae43311e-14ba-40a1-bdbf-f02d68031757" volumeName="kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035220 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0" volumeName="kubernetes.io/secret/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-metrics-tls" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035234 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="26473c28-db42-47e6-9164-8c441ccc48ca" volumeName="kubernetes.io/configmap/26473c28-db42-47e6-9164-8c441ccc48ca-service-ca" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035247 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8157f73d-c757-40c4-80bc-3c9de2f2288a" volumeName="kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-config" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035260 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6e6d218-d969-40b5-a32b-9b2093089dbf" volumeName="kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035273 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdf18981-b755-4b11-8793-38bc5e2e755b" volumeName="kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035285 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0b78aa6-7bc8-4221-81f5-bf62a7110380" volumeName="kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035299 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee3a6748-0bbc-41bf-8726-a8db18faf03b" volumeName="kubernetes.io/secret/ee3a6748-0bbc-41bf-8726-a8db18faf03b-samples-operator-tls" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035312 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33675e96-ce49-49be-9117-954ac7cca5d5" volumeName="kubernetes.io/projected/33675e96-ce49-49be-9117-954ac7cca5d5-kube-api-access-hbw6n" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035330 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6e6d218-d969-40b5-a32b-9b2093089dbf" volumeName="kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-whereabouts-configmap" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035344 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0b78aa6-7bc8-4221-81f5-bf62a7110380" volumeName="kubernetes.io/empty-dir/c0b78aa6-7bc8-4221-81f5-bf62a7110380-volume-directive-shadow" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035357 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e8c5772-b6e2-43d8-b173-af74541855fb" volumeName="kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035372 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8c0a6d2-f1f9-49e3-9475-4983b50667bf" volumeName="kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-encryption-config" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035389 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0b78aa6-7bc8-4221-81f5-bf62a7110380" volumeName="kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-metrics-client-ca" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035401 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a44fd-c153-47a6-a11d-7d4b7165c236" volumeName="kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-service-ca" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035414 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a278abf-8c59-4454-94d0-a0d0768cbec5" volumeName="kubernetes.io/secret/8a278abf-8c59-4454-94d0-a0d0768cbec5-serving-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035427 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a" volumeName="kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-profile-collector-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035442 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1" volumeName="kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-bound-sa-token" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035456 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6e6d218-d969-40b5-a32b-9b2093089dbf" volumeName="kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-binary-copy" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035468 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5429ce9-f3b7-4024-ac77-3a93a2ac77bb" volumeName="kubernetes.io/projected/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-kube-api-access-b54xg" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035482 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a44fd-c153-47a6-a11d-7d4b7165c236" volumeName="kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-serving-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035495 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="989af121-da08-4f40-b08c-dd2aa67bc60c" volumeName="kubernetes.io/secret/989af121-da08-4f40-b08c-dd2aa67bc60c-serving-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035506 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a39c5481-961c-4ac2-8c5b-a2c0165f4188" volumeName="kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035522 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d" volumeName="kubernetes.io/empty-dir/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-catalog-content" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035537 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b6a656c-40d6-4c63-9c6f-ac943eae4c9a" volumeName="kubernetes.io/secret/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-metrics-tls" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035551 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8157f73d-c757-40c4-80bc-3c9de2f2288a" volumeName="kubernetes.io/projected/8157f73d-c757-40c4-80bc-3c9de2f2288a-kube-api-access-bk5m4" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035562 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4339bd5-b8d1-467e-8158-4464ea901148" volumeName="kubernetes.io/empty-dir/a4339bd5-b8d1-467e-8158-4464ea901148-available-featuregates" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035574 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e3cc4073-a926-4aba-81e6-c616c2bb2987" volumeName="kubernetes.io/secret/e3cc4073-a926-4aba-81e6-c616c2bb2987-tls-certificates" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035588 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92008ac4-8deb-4fb9-9116-14d2d005bd36" volumeName="kubernetes.io/projected/92008ac4-8deb-4fb9-9116-14d2d005bd36-kube-api-access-n4dn4" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035603 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="419f28a9-8fd7-4b59-9554-4d884a1208b5" volumeName="kubernetes.io/configmap/419f28a9-8fd7-4b59-9554-4d884a1208b5-telemetry-config" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035617 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="64e9eca9-bbdd-4eca-9219-922bbab9b388" volumeName="kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-profile-collector-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035630 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86f6836b-b018-4c7a-87ad-51809a4b9c7a" volumeName="kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035641 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4339bd5-b8d1-467e-8158-4464ea901148" volumeName="kubernetes.io/secret/a4339bd5-b8d1-467e-8158-4464ea901148-serving-cert" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035656 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d71c90-cab7-4411-9426-0713cb026294" volumeName="kubernetes.io/projected/31d71c90-cab7-4411-9426-0713cb026294-kube-api-access-57cks" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035669 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33675e96-ce49-49be-9117-954ac7cca5d5" volumeName="kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-env-overrides" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035685 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef3a09a5-b019-48a3-97f8-7ddadb37394e" volumeName="kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-node-bootstrap-token" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035699 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="787a4fee-6625-4df5-a432-c7e1190da777" volumeName="kubernetes.io/configmap/787a4fee-6625-4df5-a432-c7e1190da777-signing-cabundle" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035715 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e8c5772-b6e2-43d8-b173-af74541855fb" volumeName="kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-metrics-client-ca" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035728 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" volumeName="kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-client-certs" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035741 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" volumeName="kubernetes.io/empty-dir/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-operand-assets" seLinuxMountContext=""
Feb 20 15:01:02.035587 master-0 kubenswrapper[28120]: I0220 15:01:02.035753 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc334fff-c0bf-4905-bcdb-b0d2a35b0590" volumeName="kubernetes.io/projected/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-ca-certs" seLinuxMountContext=""
Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035778 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="787a4fee-6625-4df5-a432-c7e1190da777" volumeName="kubernetes.io/secret/787a4fee-6625-4df5-a432-c7e1190da777-signing-key" seLinuxMountContext=""
Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035791 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86f6836b-b018-4c7a-87ad-51809a4b9c7a" volumeName="kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cluster-baremetal-operator-tls" seLinuxMountContext=""
Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035805 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49044786-483a-406e-8750-f6ded400841d" volumeName="kubernetes.io/projected/49044786-483a-406e-8750-f6ded400841d-kube-api-access-jljjg" seLinuxMountContext=""
Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035819 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc334fff-c0bf-4905-bcdb-b0d2a35b0590" volumeName="kubernetes.io/secret/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-catalogserver-certs" seLinuxMountContext=""
Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035832 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ecbdf77-0c73-487e-943e-5315a0f8b8d4" volumeName="kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-apiservice-cert" seLinuxMountContext=""
Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035846 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8157f73d-c757-40c4-80bc-3c9de2f2288a" volumeName="kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-trusted-ca-bundle" seLinuxMountContext=""
Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035861 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99fe3b99-0b40-4887-bcc8-59caa515b99f" volumeName="kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-tls" seLinuxMountContext=""
Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035876 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8c0a6d2-f1f9-49e3-9475-4983b50667bf" volumeName="kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-serving-ca" seLinuxMountContext=""
Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035889 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac3680de-aabf-414b-a340-5e5e6aea4822" volumeName="kubernetes.io/projected/ac3680de-aabf-414b-a340-5e5e6aea4822-kube-api-access-rln42" seLinuxMountContext=""
Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035901 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5429ce9-f3b7-4024-ac77-3a93a2ac77bb" volumeName="kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-config" seLinuxMountContext=""
Feb 20 15:01:02.049146
master-0 kubenswrapper[28120]: I0220 15:01:02.035914 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a44fd-c153-47a6-a11d-7d4b7165c236" volumeName="kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-ca" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035946 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="989af121-da08-4f40-b08c-dd2aa67bc60c" volumeName="kubernetes.io/projected/989af121-da08-4f40-b08c-dd2aa67bc60c-kube-api-access" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035961 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ecbdf77-0c73-487e-943e-5315a0f8b8d4" volumeName="kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-webhook-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035973 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" volumeName="kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035987 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db9dc349-5216-43ff-8c17-3a9384a010ea" volumeName="kubernetes.io/configmap/db9dc349-5216-43ff-8c17-3a9384a010ea-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.035998 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" volumeName="kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 
kubenswrapper[28120]: I0220 15:01:02.036011 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6949e9d5-460c-4b63-94cb-1b20ad75ee1c" volumeName="kubernetes.io/projected/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-kube-api-access-jpt8j" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036022 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="84a61910-48eb-4c27-8d69-f6aa7ce912ca" volumeName="kubernetes.io/projected/84a61910-48eb-4c27-8d69-f6aa7ce912ca-kube-api-access-l5fng" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036033 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0" volumeName="kubernetes.io/projected/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-kube-api-access-tthkk" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036046 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0b78aa6-7bc8-4221-81f5-bf62a7110380" volumeName="kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036064 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a44fd-c153-47a6-a11d-7d4b7165c236" volumeName="kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036077 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="996d4949-f92c-42ac-9bda-8c6ec0295e92" volumeName="kubernetes.io/secret/996d4949-f92c-42ac-9bda-8c6ec0295e92-machine-approver-tls" seLinuxMountContext="" 
Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036089 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d2b154b-de63-4c9b-99d8-487fb3035fb9" volumeName="kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-env-overrides" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036129 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ea4c132-b6d0-4dc9-942d-48e359eed418" volumeName="kubernetes.io/projected/5ea4c132-b6d0-4dc9-942d-48e359eed418-kube-api-access-7nlf9" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036146 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" volumeName="kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036161 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf4690-1ec1-44fc-94bd-730d9f2e6762" volumeName="kubernetes.io/configmap/87cf4690-1ec1-44fc-94bd-730d9f2e6762-iptables-alerter-script" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036174 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="234a44fd-c153-47a6-a11d-7d4b7165c236" volumeName="kubernetes.io/projected/234a44fd-c153-47a6-a11d-7d4b7165c236-kube-api-access-gwb5n" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036187 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43e9807a-859c-44c1-8511-0066b0f59ff8" volumeName="kubernetes.io/configmap/43e9807a-859c-44c1-8511-0066b0f59ff8-config" 
seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036202 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5429ce9-f3b7-4024-ac77-3a93a2ac77bb" volumeName="kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-image-import-ca" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036213 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de" volumeName="kubernetes.io/projected/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-kube-api-access-wcfnf" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036227 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49defec6-a225-47ab-99ff-7a846f23eb00" volumeName="kubernetes.io/projected/49defec6-a225-47ab-99ff-7a846f23eb00-kube-api-access-k94cb" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036239 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="996d4949-f92c-42ac-9bda-8c6ec0295e92" volumeName="kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-auth-proxy-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.036251 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b73ae08-0ad7-4f99-8002-6df0d984cd2c" volumeName="kubernetes.io/configmap/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.037974 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b385880b-a26b-4353-8f6f-b7f926bcc67c" volumeName="kubernetes.io/secret/b385880b-a26b-4353-8f6f-b7f926bcc67c-cert" 
seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038032 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="caef1c17-56b0-479c-b000-caaac3c2b249" volumeName="kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-auth-proxy-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038049 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a278abf-8c59-4454-94d0-a0d0768cbec5" volumeName="kubernetes.io/configmap/8a278abf-8c59-4454-94d0-a0d0768cbec5-trusted-ca-bundle" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038063 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e8c5772-b6e2-43d8-b173-af74541855fb" volumeName="kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-serving-certs-ca-bundle" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038077 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b385880b-a26b-4353-8f6f-b7f926bcc67c" volumeName="kubernetes.io/projected/b385880b-a26b-4353-8f6f-b7f926bcc67c-kube-api-access-fwclx" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038091 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1" volumeName="kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-kube-api-access-pzmqr" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038106 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32a79fe0-e619-4a66-8617-e8111bdc7e96" 
volumeName="kubernetes.io/projected/32a79fe0-e619-4a66-8617-e8111bdc7e96-kube-api-access-jkq7j" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038120 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="787a4fee-6625-4df5-a432-c7e1190da777" volumeName="kubernetes.io/projected/787a4fee-6625-4df5-a432-c7e1190da777-kube-api-access-9k6br" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038134 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99fe3b99-0b40-4887-bcc8-59caa515b99f" volumeName="kubernetes.io/empty-dir/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-textfile" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038149 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d" volumeName="kubernetes.io/empty-dir/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-utilities" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038163 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1" volumeName="kubernetes.io/configmap/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-trusted-ca" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038178 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16d6dd52-d73b-4696-873e-00a6d4bb2c77" volumeName="kubernetes.io/projected/16d6dd52-d73b-4696-873e-00a6d4bb2c77-kube-api-access-sxncg" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038192 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="45d7ef0c-272b-4d1e-965f-484975d5d25c" volumeName="kubernetes.io/secret/45d7ef0c-272b-4d1e-965f-484975d5d25c-serving-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038208 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a39c5481-961c-4ac2-8c5b-a2c0165f4188" volumeName="kubernetes.io/configmap/a39c5481-961c-4ac2-8c5b-a2c0165f4188-metrics-client-ca" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038253 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d2b154b-de63-4c9b-99d8-487fb3035fb9" volumeName="kubernetes.io/projected/5d2b154b-de63-4c9b-99d8-487fb3035fb9-kube-api-access-mclrj" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038292 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99fe3b99-0b40-4887-bcc8-59caa515b99f" volumeName="kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038306 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db9dc349-5216-43ff-8c17-3a9384a010ea" volumeName="kubernetes.io/secret/db9dc349-5216-43ff-8c17-3a9384a010ea-serving-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038319 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e5d953b-dbc7-48df-9d6b-d61030ffd6e3" volumeName="kubernetes.io/projected/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-kube-api-access-2xd6r" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038333 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="a39c5481-961c-4ac2-8c5b-a2c0165f4188" volumeName="kubernetes.io/projected/a39c5481-961c-4ac2-8c5b-a2c0165f4188-kube-api-access-tl7tw" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038350 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d71c90-cab7-4411-9426-0713cb026294" volumeName="kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038366 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32a79fe0-e619-4a66-8617-e8111bdc7e96" volumeName="kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-daemon-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038380 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e7cac87-2eaa-4dad-b2dc-c8ed0557c665" volumeName="kubernetes.io/secret/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038393 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a278abf-8c59-4454-94d0-a0d0768cbec5" volumeName="kubernetes.io/empty-dir/8a278abf-8c59-4454-94d0-a0d0768cbec5-snapshots" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038412 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdf18981-b755-4b11-8793-38bc5e2e755b" volumeName="kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038428 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="0bedbe69-fc4b-4bd7-bcc2-acead927eda2" volumeName="kubernetes.io/projected/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-kube-api-access-gk2lq" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038442 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0bedbe69-fc4b-4bd7-bcc2-acead927eda2" volumeName="kubernetes.io/secret/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-machine-api-operator-tls" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038458 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33675e96-ce49-49be-9117-954ac7cca5d5" volumeName="kubernetes.io/secret/33675e96-ce49-49be-9117-954ac7cca5d5-webhook-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038473 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d28490b0-96ca-4fe0-8fae-e6f8390f933b" volumeName="kubernetes.io/projected/d28490b0-96ca-4fe0-8fae-e6f8390f933b-kube-api-access-qm5p2" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038486 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="419f28a9-8fd7-4b59-9554-4d884a1208b5" volumeName="kubernetes.io/projected/419f28a9-8fd7-4b59-9554-4d884a1208b5-kube-api-access-fttgr" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038500 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b73ae08-0ad7-4f99-8002-6df0d984cd2c" volumeName="kubernetes.io/projected/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-kube-api-access-mb46b" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038513 28120 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="af7b6f34-adca-4bdb-9e41-e2995a1d67a8" volumeName="kubernetes.io/projected/af7b6f34-adca-4bdb-9e41-e2995a1d67a8-kube-api-access-nrrq4" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038528 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" volumeName="kubernetes.io/projected/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-kube-api-access-4jn8g" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038542 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d71c90-cab7-4411-9426-0713cb026294" volumeName="kubernetes.io/configmap/31d71c90-cab7-4411-9426-0713cb026294-trusted-ca" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038556 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32a79fe0-e619-4a66-8617-e8111bdc7e96" volumeName="kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-cni-binary-copy" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038570 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6949e9d5-460c-4b63-94cb-1b20ad75ee1c" volumeName="kubernetes.io/configmap/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-cco-trusted-ca" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038584 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e5d953b-dbc7-48df-9d6b-d61030ffd6e3" volumeName="kubernetes.io/empty-dir/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-utilities" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038598 28120 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="b385880b-a26b-4353-8f6f-b7f926bcc67c" volumeName="kubernetes.io/configmap/b385880b-a26b-4353-8f6f-b7f926bcc67c-auth-proxy-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038612 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" volumeName="kubernetes.io/projected/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-kube-api-access-9zppr" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038625 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="caef1c17-56b0-479c-b000-caaac3c2b249" volumeName="kubernetes.io/projected/caef1c17-56b0-479c-b000-caaac3c2b249-kube-api-access-8kgzf" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038638 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef3a09a5-b019-48a3-97f8-7ddadb37394e" volumeName="kubernetes.io/projected/ef3a09a5-b019-48a3-97f8-7ddadb37394e-kube-api-access-pcqd4" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038651 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0bedbe69-fc4b-4bd7-bcc2-acead927eda2" volumeName="kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038665 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33675e96-ce49-49be-9117-954ac7cca5d5" volumeName="kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-ovnkube-identity-cm" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038678 28120 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" volumeName="kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038692 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="989af121-da08-4f40-b08c-dd2aa67bc60c" volumeName="kubernetes.io/configmap/989af121-da08-4f40-b08c-dd2aa67bc60c-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038705 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8c0a6d2-f1f9-49e3-9475-4983b50667bf" volumeName="kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-serving-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038720 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5429ce9-f3b7-4024-ac77-3a93a2ac77bb" volumeName="kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-trusted-ca-bundle" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038733 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21384bd0-495c-406a-9462-e9e740c04686" volumeName="kubernetes.io/projected/21384bd0-495c-406a-9462-e9e740c04686-kube-api-access-gr6nr" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038746 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ecbdf77-0c73-487e-943e-5315a0f8b8d4" volumeName="kubernetes.io/empty-dir/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-tmpfs" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038761 28120 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="87cf4690-1ec1-44fc-94bd-730d9f2e6762" volumeName="kubernetes.io/projected/87cf4690-1ec1-44fc-94bd-730d9f2e6762-kube-api-access-r9c94" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038773 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8c0a6d2-f1f9-49e3-9475-4983b50667bf" volumeName="kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-audit-policies" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038788 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0bedbe69-fc4b-4bd7-bcc2-acead927eda2" volumeName="kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-images" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038802 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a" volumeName="kubernetes.io/projected/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-kube-api-access-tl7wm" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038823 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a278abf-8c59-4454-94d0-a0d0768cbec5" volumeName="kubernetes.io/configmap/8a278abf-8c59-4454-94d0-a0d0768cbec5-service-ca-bundle" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038835 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="929dffba-46da-4d81-a437-bc6a9fe79811" volumeName="kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038847 28120 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="ae43311e-14ba-40a1-bdbf-f02d68031757" volumeName="kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-tls" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038863 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6e6d218-d969-40b5-a32b-9b2093089dbf" volumeName="kubernetes.io/projected/b6e6d218-d969-40b5-a32b-9b2093089dbf-kube-api-access-psd59" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038875 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1" volumeName="kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038890 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0a3548f-299c-4234-9bf1-c93efcb9740b" volumeName="kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038903 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27ab8945-6a5b-4f7d-b893-6358da214499" volumeName="kubernetes.io/projected/27ab8945-6a5b-4f7d-b893-6358da214499-kube-api-access-jshgm" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038914 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8157f73d-c757-40c4-80bc-3c9de2f2288a" volumeName="kubernetes.io/secret/8157f73d-c757-40c4-80bc-3c9de2f2288a-serving-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038944 28120 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="8b73ae08-0ad7-4f99-8002-6df0d984cd2c" volumeName="kubernetes.io/secret/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-serving-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038961 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d" volumeName="kubernetes.io/projected/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-kube-api-access-xtgrt" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038974 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21384bd0-495c-406a-9462-e9e740c04686" volumeName="kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-env-overrides" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038985 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="419f28a9-8fd7-4b59-9554-4d884a1208b5" volumeName="kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.038999 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac3680de-aabf-414b-a340-5e5e6aea4822" volumeName="kubernetes.io/empty-dir/ac3680de-aabf-414b-a340-5e5e6aea4822-utilities" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039011 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="448aafd2-ffb3-42c5-8085-f6194d7862e5" volumeName="kubernetes.io/projected/448aafd2-ffb3-42c5-8085-f6194d7862e5-kube-api-access-nv57n" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039024 28120 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b6a656c-40d6-4c63-9c6f-ac943eae4c9a" volumeName="kubernetes.io/projected/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-bound-sa-token" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039037 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ecbdf77-0c73-487e-943e-5315a0f8b8d4" volumeName="kubernetes.io/projected/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-kube-api-access-ntlv2" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039049 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="64e9eca9-bbdd-4eca-9219-922bbab9b388" volumeName="kubernetes.io/projected/64e9eca9-bbdd-4eca-9219-922bbab9b388-kube-api-access-47sqj" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039063 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="900e244c-67aa-402f-b5f0-d37c5c1cedf7" volumeName="kubernetes.io/projected/900e244c-67aa-402f-b5f0-d37c5c1cedf7-kube-api-access-n85mh" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039076 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0a3548f-299c-4234-9bf1-c93efcb9740b" volumeName="kubernetes.io/configmap/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-trusted-ca" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039088 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16d6dd52-d73b-4696-873e-00a6d4bb2c77" volumeName="kubernetes.io/configmap/16d6dd52-d73b-4696-873e-00a6d4bb2c77-auth-proxy-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: 
I0220 15:01:02.039099 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43e9807a-859c-44c1-8511-0066b0f59ff8" volumeName="kubernetes.io/projected/43e9807a-859c-44c1-8511-0066b0f59ff8-kube-api-access" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039114 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" volumeName="kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039125 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e8c5772-b6e2-43d8-b173-af74541855fb" volumeName="kubernetes.io/projected/8e8c5772-b6e2-43d8-b173-af74541855fb-kube-api-access-z67rw" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039138 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93786626-fac4-48f0-bf72-992bc39f4a82" volumeName="kubernetes.io/empty-dir/93786626-fac4-48f0-bf72-992bc39f4a82-utilities" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039152 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5429ce9-f3b7-4024-ac77-3a93a2ac77bb" volumeName="kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-audit" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039165 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fe69517-eec2-4721-933c-fa27cea7ab1f" volumeName="kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 
kubenswrapper[28120]: I0220 15:01:02.039177 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="64e9eca9-bbdd-4eca-9219-922bbab9b388" volumeName="kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-srv-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039191 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" volumeName="kubernetes.io/projected/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-kube-api-access-wxjcq" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039202 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6949e9d5-460c-4b63-94cb-1b20ad75ee1c" volumeName="kubernetes.io/secret/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039215 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8157f73d-c757-40c4-80bc-3c9de2f2288a" volumeName="kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-service-ca-bundle" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039227 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fd9f419-2cdc-4991-8fb9-87d76ac58976" volumeName="kubernetes.io/projected/9fd9f419-2cdc-4991-8fb9-87d76ac58976-kube-api-access-svlzf" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039239 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fe69517-eec2-4721-933c-fa27cea7ab1f" volumeName="kubernetes.io/projected/1fe69517-eec2-4721-933c-fa27cea7ab1f-kube-api-access-rnwtd" seLinuxMountContext="" Feb 
20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039254 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21384bd0-495c-406a-9462-e9e740c04686" volumeName="kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039265 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21384bd0-495c-406a-9462-e9e740c04686" volumeName="kubernetes.io/secret/21384bd0-495c-406a-9462-e9e740c04686-ovn-node-metrics-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039285 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="caef1c17-56b0-479c-b000-caaac3c2b249" volumeName="kubernetes.io/secret/caef1c17-56b0-479c-b000-caaac3c2b249-cloud-controller-manager-operator-tls" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039297 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" volumeName="kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-stats-auth" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039309 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e8c5772-b6e2-43d8-b173-af74541855fb" volumeName="kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-federate-client-tls" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039325 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="996d4949-f92c-42ac-9bda-8c6ec0295e92" volumeName="kubernetes.io/projected/996d4949-f92c-42ac-9bda-8c6ec0295e92-kube-api-access-4kfqn" 
seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039370 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" volumeName="kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-metrics-server-audit-profiles" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039393 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc334fff-c0bf-4905-bcdb-b0d2a35b0590" volumeName="kubernetes.io/projected/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-kube-api-access-9lcqg" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039410 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45d7ef0c-272b-4d1e-965f-484975d5d25c" volumeName="kubernetes.io/projected/45d7ef0c-272b-4d1e-965f-484975d5d25c-kube-api-access-svhtr" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039425 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49defec6-a225-47ab-99ff-7a846f23eb00" volumeName="kubernetes.io/secret/49defec6-a225-47ab-99ff-7a846f23eb00-webhook-certs" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039445 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0b78aa6-7bc8-4221-81f5-bf62a7110380" volumeName="kubernetes.io/projected/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-api-access-lhzk6" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039459 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" 
volumeName="kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-metrics-certs" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039473 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86f6836b-b018-4c7a-87ad-51809a4b9c7a" volumeName="kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-images" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039489 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5429ce9-f3b7-4024-ac77-3a93a2ac77bb" volumeName="kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-encryption-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039504 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" volumeName="kubernetes.io/secret/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039518 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99fe3b99-0b40-4887-bcc8-59caa515b99f" volumeName="kubernetes.io/configmap/99fe3b99-0b40-4887-bcc8-59caa515b99f-metrics-client-ca" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039532 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdf18981-b755-4b11-8793-38bc5e2e755b" volumeName="kubernetes.io/projected/bdf18981-b755-4b11-8793-38bc5e2e755b-kube-api-access-wr5wk" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039546 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c81ad608-a8ad-4289-a8d2-d48acb9b540c" volumeName="kubernetes.io/projected/c81ad608-a8ad-4289-a8d2-d48acb9b540c-kube-api-access-wj4dx" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039559 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d28490b0-96ca-4fe0-8fae-e6f8390f933b" volumeName="kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039573 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e8c5772-b6e2-43d8-b173-af74541855fb" volumeName="kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-client-tls" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039587 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fd9f419-2cdc-4991-8fb9-87d76ac58976" volumeName="kubernetes.io/secret/9fd9f419-2cdc-4991-8fb9-87d76ac58976-metrics-tls" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039601 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93786626-fac4-48f0-bf72-992bc39f4a82" volumeName="kubernetes.io/empty-dir/93786626-fac4-48f0-bf72-992bc39f4a82-catalog-content" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039617 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a39c5481-961c-4ac2-8c5b-a2c0165f4188" volumeName="kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-tls" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039631 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="c5429ce9-f3b7-4024-ac77-3a93a2ac77bb" volumeName="kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-serving-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039644 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" volumeName="kubernetes.io/configmap/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-service-ca-bundle" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039657 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e8c5772-b6e2-43d8-b173-af74541855fb" volumeName="kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-trusted-ca-bundle" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039669 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d2b154b-de63-4c9b-99d8-487fb3035fb9" volumeName="kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovnkube-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039683 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86f6836b-b018-4c7a-87ad-51809a4b9c7a" volumeName="kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039696 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac3680de-aabf-414b-a340-5e5e6aea4822" volumeName="kubernetes.io/empty-dir/ac3680de-aabf-414b-a340-5e5e6aea4822-catalog-content" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039708 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="ae43311e-14ba-40a1-bdbf-f02d68031757" volumeName="kubernetes.io/projected/ae43311e-14ba-40a1-bdbf-f02d68031757-kube-api-access-mf5p9" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039725 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdf18981-b755-4b11-8793-38bc5e2e755b" volumeName="kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039738 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdf18981-b755-4b11-8793-38bc5e2e755b" volumeName="kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039754 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3bf5be04-e4dd-44d9-be1a-3abe6ddd2367" volumeName="kubernetes.io/configmap/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-mcc-auth-proxy-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039768 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b6a656c-40d6-4c63-9c6f-ac943eae4c9a" volumeName="kubernetes.io/configmap/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-trusted-ca" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039780 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="db9dc349-5216-43ff-8c17-3a9384a010ea" volumeName="kubernetes.io/projected/db9dc349-5216-43ff-8c17-3a9384a010ea-kube-api-access-smglm" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039792 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a" volumeName="kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-srv-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039805 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="84a61910-48eb-4c27-8d69-f6aa7ce912ca" volumeName="kubernetes.io/empty-dir/84a61910-48eb-4c27-8d69-f6aa7ce912ca-cache" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039817 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" volumeName="kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039832 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c0a3548f-299c-4234-9bf1-c93efcb9740b" volumeName="kubernetes.io/projected/c0a3548f-299c-4234-9bf1-c93efcb9740b-kube-api-access-7d5fq" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039845 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c81ad608-a8ad-4289-a8d2-d48acb9b540c" volumeName="kubernetes.io/secret/c81ad608-a8ad-4289-a8d2-d48acb9b540c-serving-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039859 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16d6dd52-d73b-4696-873e-00a6d4bb2c77" volumeName="kubernetes.io/secret/16d6dd52-d73b-4696-873e-00a6d4bb2c77-proxy-tls" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039874 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="21384bd0-495c-406a-9462-e9e740c04686" volumeName="kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-script-lib" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039891 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e7cac87-2eaa-4dad-b2dc-c8ed0557c665" volumeName="kubernetes.io/projected/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-kube-api-access-lc9pl" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039903 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d2b154b-de63-4c9b-99d8-487fb3035fb9" volumeName="kubernetes.io/secret/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039917 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e8c5772-b6e2-43d8-b173-af74541855fb" volumeName="kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039951 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1af84e0-776b-4285-906a-6880dbc82a7b" volumeName="kubernetes.io/projected/a1af84e0-776b-4285-906a-6880dbc82a7b-kube-api-access-6lp29" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039964 28120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19ce4b45-db46-4fc3-8d72-963de22f026b" volumeName="kubernetes.io/projected/19ce4b45-db46-4fc3-8d72-963de22f026b-kube-api-access-45226" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039975 28120 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="4b6a656c-40d6-4c63-9c6f-ac943eae4c9a" volumeName="kubernetes.io/projected/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-kube-api-access-mwnq7" seLinuxMountContext="" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039986 28120 reconstruct.go:97] "Volume reconstruction finished" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.039995 28120 reconciler.go:26] "Reconciler: start to sync state" Feb 20 15:01:02.049146 master-0 kubenswrapper[28120]: I0220 15:01:02.042498 28120 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 20 15:01:02.058410 master-0 kubenswrapper[28120]: I0220 15:01:02.050107 28120 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 20 15:01:02.058410 master-0 kubenswrapper[28120]: I0220 15:01:02.055054 28120 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 20 15:01:02.058410 master-0 kubenswrapper[28120]: I0220 15:01:02.055121 28120 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 20 15:01:02.058410 master-0 kubenswrapper[28120]: I0220 15:01:02.055154 28120 kubelet.go:2335] "Starting kubelet main sync loop" Feb 20 15:01:02.058410 master-0 kubenswrapper[28120]: E0220 15:01:02.055212 28120 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 20 15:01:02.075114 master-0 kubenswrapper[28120]: I0220 15:01:02.066210 28120 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 20 15:01:02.116963 master-0 kubenswrapper[28120]: I0220 15:01:02.116199 28120 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="3c2b6c4d3887c6ce78fb1f319d3d917dd19b6ede5e9ab3d53c00d05b6ea4ef23" exitCode=0 Feb 20 15:01:02.116963 master-0 kubenswrapper[28120]: I0220 
15:01:02.116240 28120 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="321be2d7453c33396b3363bf789e4d552d4e8d66090aa9915bf60f644a971c6e" exitCode=0 Feb 20 15:01:02.116963 master-0 kubenswrapper[28120]: I0220 15:01:02.116248 28120 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="92784546c39ab249199b64e99295b360ac694daa7345bcc5ca4290c1679248d5" exitCode=0 Feb 20 15:01:02.118003 master-0 kubenswrapper[28120]: I0220 15:01:02.117956 28120 generic.go:334] "Generic (PLEG): container finished" podID="989af121-da08-4f40-b08c-dd2aa67bc60c" containerID="832f243cdb2cdff1065e35c1a4b8eb6397a6696e55399d5bf71d3cb4f866d80d" exitCode=0 Feb 20 15:01:02.121212 master-0 kubenswrapper[28120]: I0220 15:01:02.121172 28120 generic.go:334] "Generic (PLEG): container finished" podID="234a44fd-c153-47a6-a11d-7d4b7165c236" containerID="581f236214a140a0dd97c9926ea209ede3f39ed6cfcbab89bbd1dddd4483776d" exitCode=0 Feb 20 15:01:02.124022 master-0 kubenswrapper[28120]: I0220 15:01:02.123988 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-gjdb4_0bedbe69-fc4b-4bd7-bcc2-acead927eda2/machine-api-operator/0.log" Feb 20 15:01:02.124379 master-0 kubenswrapper[28120]: I0220 15:01:02.124343 28120 generic.go:334] "Generic (PLEG): container finished" podID="0bedbe69-fc4b-4bd7-bcc2-acead927eda2" containerID="09c2a559e7cc2a5451aca2755577ab8e7c2b5ea2ef73bac50c4295f2287bdf15" exitCode=255 Feb 20 15:01:02.144734 master-0 kubenswrapper[28120]: I0220 15:01:02.144668 28120 generic.go:334] "Generic (PLEG): container finished" podID="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" containerID="ce6cf48b03cf7ea4bb59cbc88338b3797dd3cd5289e6bbf78ef6ac04abd04f98" exitCode=0 Feb 20 15:01:02.146813 master-0 kubenswrapper[28120]: I0220 15:01:02.146777 28120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-xcrlh_996d4949-f92c-42ac-9bda-8c6ec0295e92/machine-approver-controller/0.log" Feb 20 15:01:02.148158 master-0 kubenswrapper[28120]: I0220 15:01:02.148109 28120 generic.go:334] "Generic (PLEG): container finished" podID="996d4949-f92c-42ac-9bda-8c6ec0295e92" containerID="b6e9e6d9ccde8375bcdecc9c3bf9ed6951fb841bc2a4f124a46a0fefb565de16" exitCode=255 Feb 20 15:01:02.151832 master-0 kubenswrapper[28120]: I0220 15:01:02.151800 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-6r5qx_8157f73d-c757-40c4-80bc-3c9de2f2288a/authentication-operator/1.log" Feb 20 15:01:02.151897 master-0 kubenswrapper[28120]: I0220 15:01:02.151847 28120 generic.go:334] "Generic (PLEG): container finished" podID="8157f73d-c757-40c4-80bc-3c9de2f2288a" containerID="9eac150251b3b5d386062f7aa8467ef3cc273bff50cfaf7bb7d3226879ebfbb8" exitCode=1 Feb 20 15:01:02.155311 master-0 kubenswrapper[28120]: E0220 15:01:02.155283 28120 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 20 15:01:02.156631 master-0 kubenswrapper[28120]: I0220 15:01:02.156579 28120 generic.go:334] "Generic (PLEG): container finished" podID="c5429ce9-f3b7-4024-ac77-3a93a2ac77bb" containerID="18d29e751749b7ea9876b738d17d6268502d86978c1804f16e31b40402471107" exitCode=0 Feb 20 15:01:02.158840 master-0 kubenswrapper[28120]: I0220 15:01:02.158800 28120 generic.go:334] "Generic (PLEG): container finished" podID="16d6dd52-d73b-4696-873e-00a6d4bb2c77" containerID="e6e379ec088445dd86d2191d2d0584d608d0fb6a75f60858cd436421f083f620" exitCode=0 Feb 20 15:01:02.160652 master-0 kubenswrapper[28120]: I0220 15:01:02.160608 28120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-2sw9z_1fe69517-eec2-4721-933c-fa27cea7ab1f/package-server-manager/0.log" Feb 20 15:01:02.161031 master-0 kubenswrapper[28120]: I0220 15:01:02.161001 28120 generic.go:334] "Generic (PLEG): container finished" podID="1fe69517-eec2-4721-933c-fa27cea7ab1f" containerID="8fa1fcd077e28cf5cfeec8c2cafd29cf0677802573ac33c46747c76a0973c8ec" exitCode=1 Feb 20 15:01:02.176030 master-0 kubenswrapper[28120]: I0220 15:01:02.174462 28120 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="6e7cd59de9caeb6625ff93f951dca8b15c57f96db1e17aebced0a5231f411d3f" exitCode=0 Feb 20 15:01:02.176030 master-0 kubenswrapper[28120]: I0220 15:01:02.174535 28120 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="7caadc72530799fe020f6b0140bace32e6cb7e8ebbe6207315d6d035384c83d6" exitCode=0 Feb 20 15:01:02.176030 master-0 kubenswrapper[28120]: I0220 15:01:02.174550 28120 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="177aef91a6eb47e06724759a7ce69757e5533636be520f8861b5d3c44d7c4272" exitCode=0 Feb 20 15:01:02.176030 master-0 kubenswrapper[28120]: I0220 15:01:02.174560 28120 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="187177cba6632230d116641fd3dad458ff096f751d761a5c25483f731b58481b" exitCode=0 Feb 20 15:01:02.176030 master-0 kubenswrapper[28120]: I0220 15:01:02.174577 28120 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" containerID="86aaca74eb46c2a67484d7ed32bbe3315e4c31acc5fa267db57dbe7175337821" exitCode=0 Feb 20 15:01:02.176030 master-0 kubenswrapper[28120]: I0220 15:01:02.174589 28120 generic.go:334] "Generic (PLEG): container finished" podID="b6e6d218-d969-40b5-a32b-9b2093089dbf" 
containerID="c092110b72556c746170c7d0567154da90861fa9515b4bc320e9e6d1cc856cd6" exitCode=0 Feb 20 15:01:02.192161 master-0 kubenswrapper[28120]: I0220 15:01:02.192092 28120 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e" exitCode=0 Feb 20 15:01:02.194388 master-0 kubenswrapper[28120]: I0220 15:01:02.194340 28120 generic.go:334] "Generic (PLEG): container finished" podID="3bf5be04-e4dd-44d9-be1a-3abe6ddd2367" containerID="361cac7f381ef490c05a6ad20d7d519e61ac704ec32bc6d37576fd4551ff3afc" exitCode=0 Feb 20 15:01:02.196688 master-0 kubenswrapper[28120]: I0220 15:01:02.196656 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-855tj_caef1c17-56b0-479c-b000-caaac3c2b249/config-sync-controllers/0.log" Feb 20 15:01:02.197212 master-0 kubenswrapper[28120]: I0220 15:01:02.197170 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-855tj_caef1c17-56b0-479c-b000-caaac3c2b249/cluster-cloud-controller-manager/0.log" Feb 20 15:01:02.197278 master-0 kubenswrapper[28120]: I0220 15:01:02.197226 28120 generic.go:334] "Generic (PLEG): container finished" podID="caef1c17-56b0-479c-b000-caaac3c2b249" containerID="40d63e74e24fee68be44b5de74837dcb78a9dc13e3f7cf14b4e7c069fc14a3c1" exitCode=1 Feb 20 15:01:02.197278 master-0 kubenswrapper[28120]: I0220 15:01:02.197246 28120 generic.go:334] "Generic (PLEG): container finished" podID="caef1c17-56b0-479c-b000-caaac3c2b249" containerID="ba4791195ab28fdefd71609ee2f152b2f868666e0ec80047600b61f1c976a50f" exitCode=1 Feb 20 15:01:02.198734 master-0 kubenswrapper[28120]: I0220 15:01:02.198704 28120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_3ef51d3b-cd8b-4f34-961e-8daebbed3ca6/installer/0.log" Feb 20 15:01:02.198790 master-0 kubenswrapper[28120]: I0220 15:01:02.198743 28120 generic.go:334] "Generic (PLEG): container finished" podID="3ef51d3b-cd8b-4f34-961e-8daebbed3ca6" containerID="992d06369bcdfc83fe57ae6d1c5dce1f2cfa2163b4588fe5df6d49020418c795" exitCode=1 Feb 20 15:01:02.203612 master-0 kubenswrapper[28120]: I0220 15:01:02.203576 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-k2tnk_86f6836b-b018-4c7a-87ad-51809a4b9c7a/cluster-baremetal-operator/0.log" Feb 20 15:01:02.203681 master-0 kubenswrapper[28120]: I0220 15:01:02.203615 28120 generic.go:334] "Generic (PLEG): container finished" podID="86f6836b-b018-4c7a-87ad-51809a4b9c7a" containerID="57a9d244672b000b813223a646214cb5149d5553c3f6c953fcf4645211da137b" exitCode=1 Feb 20 15:01:02.205020 master-0 kubenswrapper[28120]: I0220 15:01:02.204972 28120 generic.go:334] "Generic (PLEG): container finished" podID="53835140-8eed-401c-ac07-f89b554ff616" containerID="ac1ebe21f01db828cbdc3775b7cb4f962d321758483e5f64757855bd43976682" exitCode=0 Feb 20 15:01:02.206985 master-0 kubenswrapper[28120]: I0220 15:01:02.206951 28120 generic.go:334] "Generic (PLEG): container finished" podID="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" containerID="a95da6b755620b3477b82b60290cab82bafb501ad18fb013d6a2d035fb2977b7" exitCode=0 Feb 20 15:01:02.206985 master-0 kubenswrapper[28120]: I0220 15:01:02.206975 28120 generic.go:334] "Generic (PLEG): container finished" podID="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" containerID="8b677f9dfe1adb3fd4defb49e7d0b98454fc7a8c20e2d380e3e690cdf86abbc6" exitCode=0 Feb 20 15:01:02.206985 master-0 kubenswrapper[28120]: I0220 15:01:02.206984 28120 generic.go:334] "Generic (PLEG): container finished" podID="d3ca2d2f-9f31-4524-a28f-cf16b02dd711" 
containerID="1233b754482b6558abf240af9822b6209076badce1d5bcade0d4d98c88cc1f1f" exitCode=0 Feb 20 15:01:02.208567 master-0 kubenswrapper[28120]: I0220 15:01:02.208536 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-gprr4_33675e96-ce49-49be-9117-954ac7cca5d5/approver/1.log" Feb 20 15:01:02.208840 master-0 kubenswrapper[28120]: I0220 15:01:02.208808 28120 generic.go:334] "Generic (PLEG): container finished" podID="33675e96-ce49-49be-9117-954ac7cca5d5" containerID="93c53e18dcac71f47a3746e6562e8b692068a3b0ff7c4afe8e6e0d3f178f230b" exitCode=1 Feb 20 15:01:02.215005 master-0 kubenswrapper[28120]: I0220 15:01:02.214955 28120 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="3bc728ed313ea4c2c24bfa6e5ec35ea80b76ead7f7237f5bfbb4c7d63e868b56" exitCode=0 Feb 20 15:01:02.215005 master-0 kubenswrapper[28120]: I0220 15:01:02.214991 28120 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="3bde77e581302fa688ce598a59eb1521eeb691223c05ecf792bb7f274b1fd8f2" exitCode=0 Feb 20 15:01:02.215005 master-0 kubenswrapper[28120]: I0220 15:01:02.214999 28120 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="b6ea946617b2fbe51c03eb02d48883421215780882113bffd73308a394e3acaf" exitCode=0 Feb 20 15:01:02.216369 master-0 kubenswrapper[28120]: I0220 15:01:02.216265 28120 generic.go:334] "Generic (PLEG): container finished" podID="8b73ae08-0ad7-4f99-8002-6df0d984cd2c" containerID="cfbd27b76aa0dc7c10ce1de7a1bdca66b3303ee8a7bc370fa5d11a1d913c8168" exitCode=0 Feb 20 15:01:02.219477 master-0 kubenswrapper[28120]: I0220 15:01:02.219444 28120 generic.go:334] "Generic (PLEG): container finished" podID="fea431d7-394f-4639-abd6-c70a28921fc6" containerID="91f517d397ca83de4c56e84947b8179187f25ef947f76871a498051ccbc41700" exitCode=0 Feb 20 15:01:02.222544 master-0 
kubenswrapper[28120]: I0220 15:01:02.222507 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 20 15:01:02.222853 master-0 kubenswrapper[28120]: I0220 15:01:02.222821 28120 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="45a697749c461413b0722aa1be0b316cc858779a0e80c5ef44f0a3c27a2f1822" exitCode=1 Feb 20 15:01:02.222901 master-0 kubenswrapper[28120]: I0220 15:01:02.222862 28120 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="8aa8f34057d37d62316a09602947b9934df303dc999d1b14efc423cb04940c72" exitCode=0 Feb 20 15:01:02.228149 master-0 kubenswrapper[28120]: I0220 15:01:02.228107 28120 generic.go:334] "Generic (PLEG): container finished" podID="ac3680de-aabf-414b-a340-5e5e6aea4822" containerID="55b93e62b4f65de932584b817ba60092f21e3f44ea709a7dccfe6475d2084e38" exitCode=0 Feb 20 15:01:02.228149 master-0 kubenswrapper[28120]: I0220 15:01:02.228136 28120 generic.go:334] "Generic (PLEG): container finished" podID="ac3680de-aabf-414b-a340-5e5e6aea4822" containerID="c971e0c69d94dc6cc3921b26332fbb6cd07c9071a5e1bbf75f6a1abf3da41b6d" exitCode=0 Feb 20 15:01:02.230198 master-0 kubenswrapper[28120]: I0220 15:01:02.230174 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_380174fb-b30c-4f45-9119-397cdca91756/installer/0.log" Feb 20 15:01:02.230283 master-0 kubenswrapper[28120]: I0220 15:01:02.230202 28120 generic.go:334] "Generic (PLEG): container finished" podID="380174fb-b30c-4f45-9119-397cdca91756" containerID="28ee0d7fd2e81f54f5dcd52927e71c388f397b4ec8b363fb1c98a6fb82168cd2" exitCode=1 Feb 20 15:01:02.237874 master-0 kubenswrapper[28120]: I0220 15:01:02.237772 28120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-2mtj6_a1af84e0-776b-4285-906a-6880dbc82a7b/snapshot-controller/3.log" Feb 20 15:01:02.237952 master-0 kubenswrapper[28120]: I0220 15:01:02.237891 28120 generic.go:334] "Generic (PLEG): container finished" podID="a1af84e0-776b-4285-906a-6880dbc82a7b" containerID="f1b1e34a79f20570df08b5141ba77d85f604d72218b6eb7fe601f67b1fcd7a77" exitCode=1 Feb 20 15:01:02.249669 master-0 kubenswrapper[28120]: I0220 15:01:02.249597 28120 generic.go:334] "Generic (PLEG): container finished" podID="b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d" containerID="501e152806072f51a6aa348d15cc2667dcd91a44e63ea82bf19b7f6a5b79b7c9" exitCode=0 Feb 20 15:01:02.249669 master-0 kubenswrapper[28120]: I0220 15:01:02.249655 28120 generic.go:334] "Generic (PLEG): container finished" podID="b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d" containerID="e7333c1741153b59af991a3ad87866cd9c88f6ffc09e8e9cf921a7d0c933ce1e" exitCode=0 Feb 20 15:01:02.253365 master-0 kubenswrapper[28120]: I0220 15:01:02.253318 28120 generic.go:334] "Generic (PLEG): container finished" podID="93786626-fac4-48f0-bf72-992bc39f4a82" containerID="4b48185bed34b04ded3112db1a2c329d504a7ceb8c020ba9fbe406707b9c3662" exitCode=0 Feb 20 15:01:02.253365 master-0 kubenswrapper[28120]: I0220 15:01:02.253364 28120 generic.go:334] "Generic (PLEG): container finished" podID="93786626-fac4-48f0-bf72-992bc39f4a82" containerID="56019874af29c4e772f7520294fcbc7349ad7c86907d26939ad87c2a68027c4a" exitCode=0 Feb 20 15:01:02.261869 master-0 kubenswrapper[28120]: I0220 15:01:02.261830 28120 generic.go:334] "Generic (PLEG): container finished" podID="99fe3b99-0b40-4887-bcc8-59caa515b99f" containerID="ac76df8cb547ae36da1275aa8fb2cdc86502281cca8b0c482befd5640340a0ca" exitCode=0 Feb 20 15:01:02.283210 master-0 kubenswrapper[28120]: I0220 15:01:02.283042 28120 generic.go:334] "Generic (PLEG): container finished" podID="014f3913-ac7e-431a-880c-91d979a5dfc7" 
containerID="d0525760cb8ba3e4a202836682905e3209d011265d322e121763f9e03af800fb" exitCode=0 Feb 20 15:01:02.294373 master-0 kubenswrapper[28120]: I0220 15:01:02.294007 28120 generic.go:334] "Generic (PLEG): container finished" podID="c81ad608-a8ad-4289-a8d2-d48acb9b540c" containerID="5433accfcf1efda61ccbe8f683016067c773a6f6dbc87107ff277c75114e35c4" exitCode=0 Feb 20 15:01:02.299599 master-0 kubenswrapper[28120]: I0220 15:01:02.299558 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-jl7zr_fc334fff-c0bf-4905-bcdb-b0d2a35b0590/manager/0.log" Feb 20 15:01:02.303545 master-0 kubenswrapper[28120]: I0220 15:01:02.303496 28120 generic.go:334] "Generic (PLEG): container finished" podID="fc334fff-c0bf-4905-bcdb-b0d2a35b0590" containerID="c477064b0f3fd6cd0d107cda0e6daa47e69c108cc08e8c15adda744ad3c559d0" exitCode=1 Feb 20 15:01:02.320336 master-0 kubenswrapper[28120]: I0220 15:01:02.320278 28120 generic.go:334] "Generic (PLEG): container finished" podID="bdf18981-b755-4b11-8793-38bc5e2e755b" containerID="71a3faa6e2a13b4bcadc91647966380b556ee1824a73e0209af007ec80d749b3" exitCode=0 Feb 20 15:01:02.328426 master-0 kubenswrapper[28120]: I0220 15:01:02.328374 28120 generic.go:334] "Generic (PLEG): container finished" podID="43e9807a-859c-44c1-8511-0066b0f59ff8" containerID="434ed936cc25c1d0e0f36dd52a8572c7b7417d14a5a50821cdca25739e6e9d2b" exitCode=0 Feb 20 15:01:02.332359 master-0 kubenswrapper[28120]: I0220 15:01:02.332101 28120 generic.go:334] "Generic (PLEG): container finished" podID="9fd9f419-2cdc-4991-8fb9-87d76ac58976" containerID="5761b5d97bb857209597024a19cdbe2341d245c395e6ce681c8bc8fd7fa023bd" exitCode=0 Feb 20 15:01:02.335637 master-0 kubenswrapper[28120]: I0220 15:01:02.335601 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_277ab008-e6f0-49cd-801d-54d3071036d4/installer/0.log" Feb 20 15:01:02.335713 master-0 
kubenswrapper[28120]: I0220 15:01:02.335640 28120 generic.go:334] "Generic (PLEG): container finished" podID="277ab008-e6f0-49cd-801d-54d3071036d4" containerID="6c7c12ccf7f07aacf9744ba31c10a72a4c19226b35c8d4fd36f32979a50dbaaf" exitCode=1 Feb 20 15:01:02.356194 master-0 kubenswrapper[28120]: I0220 15:01:02.341226 28120 generic.go:334] "Generic (PLEG): container finished" podID="8c60ad1f-f8d9-4c67-97a3-f9fa491bd463" containerID="39729aa63d210240a6c419acbf228b3a124ab4900f3cc120e7b7aead6bf8e73a" exitCode=0 Feb 20 15:01:02.356194 master-0 kubenswrapper[28120]: I0220 15:01:02.347483 28120 generic.go:334] "Generic (PLEG): container finished" podID="b6285323-3e75-4d44-ad05-98890c097dd2" containerID="e0e54afa304c07256ca81f12b5ac712d5ac8488390931a330fe4a44a3c9b790d" exitCode=0 Feb 20 15:01:02.356194 master-0 kubenswrapper[28120]: I0220 15:01:02.351088 28120 generic.go:334] "Generic (PLEG): container finished" podID="45d7ef0c-272b-4d1e-965f-484975d5d25c" containerID="2b921a59215a9b57fc0e140139af8ee009d893b2733cf5fcafdbd68899442899" exitCode=0 Feb 20 15:01:02.356194 master-0 kubenswrapper[28120]: I0220 15:01:02.352300 28120 generic.go:334] "Generic (PLEG): container finished" podID="5c8741d7-c96b-41cc-80cb-81683bb68480" containerID="cff869feeda154776fdb80bde49136ec0b5b04dcf06768e009678b70576a1603" exitCode=0 Feb 20 15:01:02.356194 master-0 kubenswrapper[28120]: I0220 15:01:02.354230 28120 generic.go:334] "Generic (PLEG): container finished" podID="ab3c370c-58b4-4115-a359-b3f55c87284d" containerID="00ed587ddf8155d51df42eba4d283cbd6beb09f53d1fc60d2651e845ec7cf08c" exitCode=0 Feb 20 15:01:02.356194 master-0 kubenswrapper[28120]: E0220 15:01:02.355349 28120 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 20 15:01:02.356645 master-0 kubenswrapper[28120]: I0220 15:01:02.356222 28120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-2tpv8_49044786-483a-406e-8750-f6ded400841d/control-plane-machine-set-operator/0.log" Feb 20 15:01:02.356645 master-0 kubenswrapper[28120]: I0220 15:01:02.356254 28120 generic.go:334] "Generic (PLEG): container finished" podID="49044786-483a-406e-8750-f6ded400841d" containerID="c537be0fb6abb27532917c3ba13de8d47b09b2f7faa20aacc94423594538336f" exitCode=1 Feb 20 15:01:02.358340 master-0 kubenswrapper[28120]: I0220 15:01:02.358292 28120 generic.go:334] "Generic (PLEG): container finished" podID="27ab8945-6a5b-4f7d-b893-6358da214499" containerID="3a018b588cd0fab81aef4437e8a3c01bf2d7562f85789ce7770c3b488cc91b89" exitCode=0 Feb 20 15:01:02.362600 master-0 kubenswrapper[28120]: I0220 15:01:02.362550 28120 generic.go:334] "Generic (PLEG): container finished" podID="a8c0a6d2-f1f9-49e3-9475-4983b50667bf" containerID="a27dacc9767bb08d41caf26b14c781b3928a704b21733f539c8b91a44b0c4d18" exitCode=0 Feb 20 15:01:02.368935 master-0 kubenswrapper[28120]: I0220 15:01:02.368785 28120 generic.go:334] "Generic (PLEG): container finished" podID="a4339bd5-b8d1-467e-8158-4464ea901148" containerID="23b61efd81399a78fa532e7f0cf8b35a9b7f7f7e97f61e6f0f85ac41949a2a92" exitCode=0 Feb 20 15:01:02.369061 master-0 kubenswrapper[28120]: I0220 15:01:02.369040 28120 generic.go:334] "Generic (PLEG): container finished" podID="a4339bd5-b8d1-467e-8158-4464ea901148" containerID="638df7437edc2bded4ad7d7ef94d2b7ebf2de761535638d3ecef6e0202944682" exitCode=0 Feb 20 15:01:02.372745 master-0 kubenswrapper[28120]: I0220 15:01:02.372675 28120 generic.go:334] "Generic (PLEG): container finished" podID="4c31b8a7-edcb-403d-9122-7eb740f7d659" containerID="696e06ef6554e221cbbd27e48c3197d621e72c8d19b1df8b12bd4eab6b3279b8" exitCode=0 Feb 20 15:01:02.376698 master-0 kubenswrapper[28120]: I0220 15:01:02.376671 28120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-6qqvd_84a61910-48eb-4c27-8d69-f6aa7ce912ca/manager/0.log" Feb 20 15:01:02.376786 master-0 kubenswrapper[28120]: I0220 15:01:02.376717 28120 generic.go:334] "Generic (PLEG): container finished" podID="84a61910-48eb-4c27-8d69-f6aa7ce912ca" containerID="033a3d2eac65c1b4d9f27c950aeb8dc662b4f02d9215e718db95c771bce201e1" exitCode=1 Feb 20 15:01:02.378064 master-0 kubenswrapper[28120]: I0220 15:01:02.378025 28120 generic.go:334] "Generic (PLEG): container finished" podID="c0a3548f-299c-4234-9bf1-c93efcb9740b" containerID="52bf43d0e30c121fdb642cca3e4e8c737348e2c0806817b6c660ae4bd355d192" exitCode=0 Feb 20 15:01:02.381988 master-0 kubenswrapper[28120]: I0220 15:01:02.381868 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_986049a1-b3e4-4dca-b178-55eaa7a27bfb/installer/0.log" Feb 20 15:01:02.382441 master-0 kubenswrapper[28120]: I0220 15:01:02.382163 28120 generic.go:334] "Generic (PLEG): container finished" podID="986049a1-b3e4-4dca-b178-55eaa7a27bfb" containerID="6f844b10f8ac3c87a0a1682a1e7ea9ccbec49915b04b1fd7a88cca60f9004b80" exitCode=1 Feb 20 15:01:02.384339 master-0 kubenswrapper[28120]: I0220 15:01:02.384266 28120 generic.go:334] "Generic (PLEG): container finished" podID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerID="4c99e85f05d7056363eecf219cc429ad9226d3b3266d2b4c70190b2024933a11" exitCode=0 Feb 20 15:01:02.384339 master-0 kubenswrapper[28120]: I0220 15:01:02.384323 28120 generic.go:334] "Generic (PLEG): container finished" podID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerID="a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343" exitCode=0 Feb 20 15:01:02.395895 master-0 kubenswrapper[28120]: I0220 15:01:02.393661 28120 generic.go:334] "Generic (PLEG): container finished" podID="21384bd0-495c-406a-9462-e9e740c04686" 
containerID="325237c1c62eee1b6dbe253582be0281f8aeaa79ed6559821ac6420b7b9c38ca" exitCode=0 Feb 20 15:01:02.398837 master-0 kubenswrapper[28120]: I0220 15:01:02.398787 28120 generic.go:334] "Generic (PLEG): container finished" podID="5d2b154b-de63-4c9b-99d8-487fb3035fb9" containerID="c3644a2305f2cac790098fa61dc92fdcede4316b05ab9e68ec6a558810ecdfcf" exitCode=0 Feb 20 15:01:02.400181 master-0 kubenswrapper[28120]: I0220 15:01:02.400139 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_975d0fde-cb2f-4599-b3b7-7de876307a61/installer/0.log" Feb 20 15:01:02.400181 master-0 kubenswrapper[28120]: I0220 15:01:02.400175 28120 generic.go:334] "Generic (PLEG): container finished" podID="975d0fde-cb2f-4599-b3b7-7de876307a61" containerID="a59f2b3ca51cdc733c2fc543e6bd0ce183b3347c73680778d3a84d4f88dd4a1f" exitCode=1 Feb 20 15:01:02.404063 master-0 kubenswrapper[28120]: I0220 15:01:02.404017 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-fjtrw_4b6a656c-40d6-4c63-9c6f-ac943eae4c9a/ingress-operator/2.log" Feb 20 15:01:02.404421 master-0 kubenswrapper[28120]: I0220 15:01:02.404376 28120 generic.go:334] "Generic (PLEG): container finished" podID="4b6a656c-40d6-4c63-9c6f-ac943eae4c9a" containerID="ba86653512a4222e60f99c9a0811e8150bea75c06b16f3bd7d165d8b4d82ace0" exitCode=1 Feb 20 15:01:02.406791 master-0 kubenswrapper[28120]: I0220 15:01:02.406678 28120 generic.go:334] "Generic (PLEG): container finished" podID="6e5d953b-dbc7-48df-9d6b-d61030ffd6e3" containerID="6bb51ccc67529cda0c8d2e85bd6a87b5b5906d7277689a9401dd4cc5bc52c400" exitCode=0 Feb 20 15:01:02.406791 master-0 kubenswrapper[28120]: I0220 15:01:02.406701 28120 generic.go:334] "Generic (PLEG): container finished" podID="6e5d953b-dbc7-48df-9d6b-d61030ffd6e3" containerID="47e63e5f96b20c92842a652e9774f2aec1b3dc91bda96ad7600899bf883b2ca7" exitCode=0 Feb 20 15:01:02.419776 master-0 kubenswrapper[28120]: 
I0220 15:01:02.419723 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler/0.log" Feb 20 15:01:02.420408 master-0 kubenswrapper[28120]: I0220 15:01:02.420260 28120 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b" exitCode=1 Feb 20 15:01:02.420647 master-0 kubenswrapper[28120]: I0220 15:01:02.420418 28120 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f" exitCode=0 Feb 20 15:01:02.435018 master-0 kubenswrapper[28120]: I0220 15:01:02.429871 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-c8w7r_b385880b-a26b-4353-8f6f-b7f926bcc67c/cluster-autoscaler-operator/0.log" Feb 20 15:01:02.435018 master-0 kubenswrapper[28120]: I0220 15:01:02.432226 28120 generic.go:334] "Generic (PLEG): container finished" podID="b385880b-a26b-4353-8f6f-b7f926bcc67c" containerID="fcbb2a13969414b96cd30dbad7457a49997232b9842608fdd68bbd19061a8401" exitCode=255 Feb 20 15:01:02.439860 master-0 kubenswrapper[28120]: I0220 15:01:02.439784 28120 generic.go:334] "Generic (PLEG): container finished" podID="db9dc349-5216-43ff-8c17-3a9384a010ea" containerID="255184eff0270c34b8e6556e377cc8915ae25bb2f15df7164830c2551d563b2b" exitCode=0 Feb 20 15:01:02.622528 master-0 kubenswrapper[28120]: I0220 15:01:02.622430 28120 manager.go:324] Recovery completed Feb 20 15:01:02.718118 master-0 kubenswrapper[28120]: I0220 15:01:02.718057 28120 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 20 15:01:02.718118 master-0 kubenswrapper[28120]: I0220 15:01:02.718086 28120 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 20 15:01:02.718118 master-0 
kubenswrapper[28120]: I0220 15:01:02.718105 28120 state_mem.go:36] "Initialized new in-memory state store" Feb 20 15:01:02.718375 master-0 kubenswrapper[28120]: I0220 15:01:02.718266 28120 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 20 15:01:02.718375 master-0 kubenswrapper[28120]: I0220 15:01:02.718277 28120 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 20 15:01:02.718375 master-0 kubenswrapper[28120]: I0220 15:01:02.718295 28120 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 20 15:01:02.718375 master-0 kubenswrapper[28120]: I0220 15:01:02.718302 28120 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 20 15:01:02.718375 master-0 kubenswrapper[28120]: I0220 15:01:02.718309 28120 policy_none.go:49] "None policy: Start" Feb 20 15:01:02.729035 master-0 kubenswrapper[28120]: I0220 15:01:02.728964 28120 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 20 15:01:02.729204 master-0 kubenswrapper[28120]: I0220 15:01:02.729069 28120 state_mem.go:35] "Initializing new in-memory state store" Feb 20 15:01:02.729558 master-0 kubenswrapper[28120]: I0220 15:01:02.729524 28120 state_mem.go:75] "Updated machine memory state" Feb 20 15:01:02.729617 master-0 kubenswrapper[28120]: I0220 15:01:02.729557 28120 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 20 15:01:02.755768 master-0 kubenswrapper[28120]: E0220 15:01:02.755706 28120 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 20 15:01:02.756573 master-0 kubenswrapper[28120]: I0220 15:01:02.756539 28120 manager.go:334] "Starting Device Plugin manager" Feb 20 15:01:02.756651 master-0 kubenswrapper[28120]: I0220 15:01:02.756603 28120 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 20 15:01:02.756651 master-0 kubenswrapper[28120]: I0220 
15:01:02.756615 28120 server.go:79] "Starting device plugin registration server" Feb 20 15:01:02.757077 master-0 kubenswrapper[28120]: I0220 15:01:02.757051 28120 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 20 15:01:02.757166 master-0 kubenswrapper[28120]: I0220 15:01:02.757071 28120 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 20 15:01:02.757301 master-0 kubenswrapper[28120]: I0220 15:01:02.757277 28120 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 20 15:01:02.757381 master-0 kubenswrapper[28120]: I0220 15:01:02.757361 28120 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 20 15:01:02.757381 master-0 kubenswrapper[28120]: I0220 15:01:02.757369 28120 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 20 15:01:02.858213 master-0 kubenswrapper[28120]: I0220 15:01:02.858119 28120 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 20 15:01:02.862832 master-0 kubenswrapper[28120]: I0220 15:01:02.862795 28120 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 20 15:01:02.862994 master-0 kubenswrapper[28120]: I0220 15:01:02.862976 28120 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 20 15:01:02.863115 master-0 kubenswrapper[28120]: I0220 15:01:02.863100 28120 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 20 15:01:02.863454 master-0 kubenswrapper[28120]: I0220 15:01:02.863438 28120 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 20 15:01:02.875428 master-0 kubenswrapper[28120]: I0220 15:01:02.875292 28120 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 20 15:01:02.875428 master-0 
kubenswrapper[28120]: I0220 15:01:02.875366 28120 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 20 15:01:02.951980 master-0 kubenswrapper[28120]: I0220 15:01:02.951895 28120 apiserver.go:52] "Watching apiserver" Feb 20 15:01:02.982285 master-0 kubenswrapper[28120]: I0220 15:01:02.982183 28120 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 20 15:01:03.448043 master-0 kubenswrapper[28120]: I0220 15:01:03.447980 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-check-endpoints/0.log" Feb 20 15:01:03.460059 master-0 kubenswrapper[28120]: I0220 15:01:03.459284 28120 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="77c5708572ab9b4b6918c12a1fcd864571adf469d8703ecc7203af8fab7885f3" exitCode=255 Feb 20 15:01:03.556413 master-0 kubenswrapper[28120]: I0220 15:01:03.556282 28120 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 20 15:01:03.558296 master-0 kubenswrapper[28120]: I0220 15:01:03.558201 28120 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4","openshift-dns/dns-default-dzfl8","openshift-dns/node-resolver-djs75","openshift-kube-controller-manager/installer-3-master-0","openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh","openshift-marketplace/certified-operators-9wddt","openshift-marketplace/community-operators-x5fhb","openshift-multus/multus-m6hpf","openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm","openshift-ingress/router-default-7b65dc9fcb-tlsdt","openshift-kube-controller-manager/installer-3-retry-1-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt","openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p","openshift-ingress-canary/ingress-canary-5qlzq","openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r","openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj","openshift-cluster-node-tuning-operator/tuned-jc4wl","openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx","openshift-network-operator/iptables-alerter-cgp8r","openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd","openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5","openshift-etcd/etcd-master-0","openshift-kube-apiserver/installer-1-master-0","openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m","openshift-monitoring/kube-state-metrics-59584d565f-stlhz","openshift-monitoring/node-exporter-bk9bp","openshift-apiserver-operator/openshift-apiserver-operator-8
586dccc9b-pwm24","openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx","openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6","openshift-insights/insights-operator-59b498fcfb-b9jmk","openshift-kube-controller-manager/installer-1-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp","openshift-network-diagnostics/network-check-source-58fb6744f5-nth67","openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-apiserver/installer-3-master-0","openshift-monitoring/prometheus-operator-754bc4d665-gsn48","openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7","openshift-apiserver/apiserver-776c8f54bc-gmvx8","openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7","openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc","openshift-controller-manager/controller-manager-647657fcb-w9586","openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt","openshift-monitoring/metrics-server-9bcdd7684-kz2z2","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp","openshift-service-ca/service-ca-576b4d78bd-fc795","assisted-installer/assisted-installer-controller-wtxfh","openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh","openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x","openshift-kube-apiserver/installer-3-retry-1-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb","opensh
ift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s","openshift-cluster-version/cluster-version-operator-57476485-nl7tx","openshift-dns-operator/dns-operator-8c7d49845-gkrph","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww","openshift-kube-scheduler/installer-3-master-0","openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq","openshift-multus/multus-additional-cni-plugins-6ts4p","openshift-multus/network-metrics-daemon-99lkv","openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr","openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c","openshift-ingress-operator/ingress-operator-6569778c84-fjtrw","openshift-machine-config-operator/machine-config-daemon-ztgdm","openshift-machine-config-operator/machine-config-server-5frvf","openshift-marketplace/marketplace-operator-6f5488b997-97m7r","openshift-marketplace/redhat-operators-z4wzg","openshift-network-node-identity/network-node-identity-gprr4","openshift-network-operator/network-operator-7d7db75979-tj8fx","openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2","openshift-ovn-kubernetes/ovnkube-node-5gzs6","openshift-etcd/installer-1-master-0","openshift-etcd/installer-2-master-0","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6","openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8","openshift-marketplace/redhat-marketplace-n2cdp","openshift-network-diagnostics/network-check-target-ljvkb","openshift-oauth-apiserver/apiserver-7659f6b598-z8454"] Feb 20 15:01:03.559464 master-0 kubenswrapper[28120]: I0220 15:01:03.559423 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-wtxfh" Feb 20 15:01:03.563882 master-0 kubenswrapper[28120]: I0220 15:01:03.563831 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2" Feb 20 15:01:03.564391 master-0 kubenswrapper[28120]: I0220 15:01:03.564341 28120 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="8dae063b-c9d3-429d-96d3-35490fb40222" Feb 20 15:01:03.570953 master-0 kubenswrapper[28120]: I0220 15:01:03.568438 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 20 15:01:03.574948 master-0 kubenswrapper[28120]: I0220 15:01:03.572009 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 20 15:01:03.574948 master-0 kubenswrapper[28120]: I0220 15:01:03.573163 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 20 15:01:03.574948 master-0 kubenswrapper[28120]: I0220 15:01:03.573861 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 20 15:01:03.585945 master-0 kubenswrapper[28120]: I0220 15:01:03.581506 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 20 15:01:03.585945 master-0 kubenswrapper[28120]: I0220 15:01:03.582105 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 20 15:01:03.585945 master-0 kubenswrapper[28120]: I0220 15:01:03.582468 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 20 15:01:03.585945 master-0 kubenswrapper[28120]: I0220 15:01:03.582617 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 20 15:01:03.585945 master-0 kubenswrapper[28120]: E0220 15:01:03.582729 28120 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.585945 master-0 kubenswrapper[28120]: I0220 15:01:03.583139 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 20 15:01:03.585945 master-0 kubenswrapper[28120]: I0220 15:01:03.583148 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 20 15:01:03.585945 master-0 kubenswrapper[28120]: I0220 15:01:03.583548 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 20 15:01:03.609364 master-0 kubenswrapper[28120]: I0220 15:01:03.608247 28120 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 20 15:01:03.609364 master-0 kubenswrapper[28120]: I0220 15:01:03.608428 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 20 15:01:03.609364 master-0 kubenswrapper[28120]: I0220 15:01:03.608676 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 20 15:01:03.609364 master-0 kubenswrapper[28120]: I0220 15:01:03.608720 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 20 15:01:03.614449 master-0 kubenswrapper[28120]: I0220 15:01:03.608775 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 20 15:01:03.615932 master-0 kubenswrapper[28120]: I0220 15:01:03.614597 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 20 15:01:03.615932 master-0 kubenswrapper[28120]: I0220 15:01:03.608852 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 20 15:01:03.615932 master-0 kubenswrapper[28120]: I0220 15:01:03.614669 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 20 15:01:03.615932 master-0 kubenswrapper[28120]: I0220 15:01:03.614831 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 20 15:01:03.615932 master-0 kubenswrapper[28120]: I0220 15:01:03.614971 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 20 15:01:03.615932 master-0 kubenswrapper[28120]: I0220 15:01:03.615099 28120 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 20 15:01:03.615932 master-0 kubenswrapper[28120]: I0220 15:01:03.608888 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 20 15:01:03.615932 master-0 kubenswrapper[28120]: I0220 15:01:03.615904 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 20 15:01:03.616212 master-0 kubenswrapper[28120]: I0220 15:01:03.608963 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 20 15:01:03.616212 master-0 kubenswrapper[28120]: I0220 15:01:03.616146 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 20 15:01:03.616212 master-0 kubenswrapper[28120]: I0220 15:01:03.616156 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 20 15:01:03.616298 master-0 kubenswrapper[28120]: I0220 15:01:03.614603 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 20 15:01:03.616326 master-0 kubenswrapper[28120]: I0220 15:01:03.616302 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.609353 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.609539 28120 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.616978 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.609589 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.609765 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.617425 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.609405 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.617568 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.616457 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.617644 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: E0220 15:01:03.617713 28120 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:01:03.618795 master-0 
kubenswrapper[28120]: I0220 15:01:03.617888 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.617936 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.618114 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.618430 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.618474 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 20 15:01:03.618795 master-0 kubenswrapper[28120]: I0220 15:01:03.618508 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.625859 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.625965 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626004 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: E0220 15:01:03.626061 28120 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626073 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626221 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626233 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626304 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626317 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626347 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626405 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626485 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626488 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626523 28120 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626554 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626587 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626633 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626692 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626720 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626747 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626791 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626819 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626857 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626909 28120 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 20 15:01:03.626905 master-0 kubenswrapper[28120]: I0220 15:01:03.626939 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.626995 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.627064 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.627079 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.626758 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.626698 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.627065 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.627219 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.626487 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.627330 28120 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.626634 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.626164 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.627465 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.626128 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.627538 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.627558 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 20 15:01:03.627572 master-0 kubenswrapper[28120]: I0220 15:01:03.627580 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 20 15:01:03.627979 master-0 kubenswrapper[28120]: I0220 15:01:03.627609 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 20 15:01:03.627979 master-0 kubenswrapper[28120]: I0220 15:01:03.626615 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 20 15:01:03.627979 master-0 kubenswrapper[28120]: I0220 15:01:03.627720 28120 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Feb 20 15:01:03.627979 master-0 kubenswrapper[28120]: I0220 15:01:03.627729 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 20 15:01:03.627979 master-0 kubenswrapper[28120]: I0220 15:01:03.627738 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 20 15:01:03.627979 master-0 kubenswrapper[28120]: I0220 15:01:03.627958 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 20 15:01:03.628127 master-0 kubenswrapper[28120]: I0220 15:01:03.628063 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 20 15:01:03.628387 master-0 kubenswrapper[28120]: I0220 15:01:03.628174 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 20 15:01:03.628387 master-0 kubenswrapper[28120]: I0220 15:01:03.628287 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 20 15:01:03.628387 master-0 kubenswrapper[28120]: I0220 15:01:03.628328 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 20 15:01:03.628495 master-0 kubenswrapper[28120]: I0220 15:01:03.628435 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 20 15:01:03.628495 master-0 kubenswrapper[28120]: I0220 15:01:03.628469 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628630 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628655 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628681 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7111b0bf2b7379929af69699174f229cbbc25f01fc7ffc44b3371950f17c6f2" Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628689 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" event={"ID":"989af121-da08-4f40-b08c-dd2aa67bc60c","Type":"ContainerStarted","Data":"179a409fb734cc1e38b874ef7dc3085074afe4aed4fb1a3a89836ccbf244466e"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628727 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628735 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" event={"ID":"989af121-da08-4f40-b08c-dd2aa67bc60c","Type":"ContainerDied","Data":"832f243cdb2cdff1065e35c1a4b8eb6397a6696e55399d5bf71d3cb4f866d80d"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628750 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" 
event={"ID":"989af121-da08-4f40-b08c-dd2aa67bc60c","Type":"ContainerStarted","Data":"b4cf8dbc3fd31a273c2cbd586eecdb2a0961392b7bd552bb39381cfb88539e45"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628760 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dzfl8" event={"ID":"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0","Type":"ContainerStarted","Data":"ba357a800fe404f4a9748472118f17be9b12e6d6ab1016a71232a56b0b7e5488"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628770 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dzfl8" event={"ID":"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0","Type":"ContainerStarted","Data":"4431b0961f5ef90f6fd38e08f66e9d8f2d37ec169bbd7cc70b8c5597be2182b0"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628779 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dzfl8" event={"ID":"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0","Type":"ContainerStarted","Data":"fa9d778b1d5703420b9779e5e17c8c6a6104fc97f8264778eb9ed382719853b9"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628782 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628787 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" event={"ID":"234a44fd-c153-47a6-a11d-7d4b7165c236","Type":"ContainerStarted","Data":"103407b542cb92b60a19cb575033cc9b552341ed431c515e2e942eb226538d8d"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628827 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" event={"ID":"234a44fd-c153-47a6-a11d-7d4b7165c236","Type":"ContainerDied","Data":"581f236214a140a0dd97c9926ea209ede3f39ed6cfcbab89bbd1dddd4483776d"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628858 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" event={"ID":"234a44fd-c153-47a6-a11d-7d4b7165c236","Type":"ContainerStarted","Data":"7190b6f768a0fe97808696f83db6e3236f51dc32c15727d9791bd6e154e97696"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628875 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" event={"ID":"0bedbe69-fc4b-4bd7-bcc2-acead927eda2","Type":"ContainerStarted","Data":"82af1e0ac2a38d423d2f66ae453fd46fc4c8ae778116720f18e17a37eb6994b6"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628888 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" event={"ID":"0bedbe69-fc4b-4bd7-bcc2-acead927eda2","Type":"ContainerDied","Data":"09c2a559e7cc2a5451aca2755577ab8e7c2b5ea2ef73bac50c4295f2287bdf15"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628901 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" event={"ID":"0bedbe69-fc4b-4bd7-bcc2-acead927eda2","Type":"ContainerStarted","Data":"163c978ceae0c9e27e26a1ee8ee74f1b64e99eeb67823839b6892e27b1c56ac9"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628889 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628965 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" event={"ID":"0bedbe69-fc4b-4bd7-bcc2-acead927eda2","Type":"ContainerStarted","Data":"934ad9d048e353486054177eacce7219c994c68dfad561ddfd4035fc938101d3"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628983 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" event={"ID":"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd","Type":"ContainerStarted","Data":"00f9e9b6b6ccf56cbc32cbe6a3bf7dcabdcf2702c8bfb772dfa8c5e881fe2a66"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.628996 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" event={"ID":"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd","Type":"ContainerDied","Data":"ce6cf48b03cf7ea4bb59cbc88338b3797dd3cd5289e6bbf78ef6ac04abd04f98"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629022 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" event={"ID":"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd","Type":"ContainerStarted","Data":"a3b80d783578c7d5bcce0396d10b0b7507567b7ddeed1d7dec131680bd38e6da"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629031 28120 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" event={"ID":"996d4949-f92c-42ac-9bda-8c6ec0295e92","Type":"ContainerStarted","Data":"21cd6842aaa686fe3e5ffd58e0388911fa0632be4778f1bd4b937f97547182c4"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629042 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" event={"ID":"996d4949-f92c-42ac-9bda-8c6ec0295e92","Type":"ContainerDied","Data":"b6e9e6d9ccde8375bcdecc9c3bf9ed6951fb841bc2a4f124a46a0fefb565de16"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629052 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" event={"ID":"996d4949-f92c-42ac-9bda-8c6ec0295e92","Type":"ContainerStarted","Data":"4f03a59c794ee73bb7ffacd9f9054d362f5e6f5814326c13ca3530a0f5caacfb"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629060 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" event={"ID":"996d4949-f92c-42ac-9bda-8c6ec0295e92","Type":"ContainerStarted","Data":"8ef8165957098f6be8792289e9cb306a276c73110e287a7b80ba51a3888e812c"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629086 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" event={"ID":"8157f73d-c757-40c4-80bc-3c9de2f2288a","Type":"ContainerStarted","Data":"7385adb772bdc866c1e9e9a8c8aa66d6fd12f60258c65541abbc4f3fd882ad30"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629095 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" 
event={"ID":"8157f73d-c757-40c4-80bc-3c9de2f2288a","Type":"ContainerDied","Data":"9eac150251b3b5d386062f7aa8467ef3cc273bff50cfaf7bb7d3226879ebfbb8"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629104 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" event={"ID":"8157f73d-c757-40c4-80bc-3c9de2f2288a","Type":"ContainerStarted","Data":"3f68274f91c27d15a060c5bac225b0b94e8aa70b90454461d048fa9e384a03df"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629114 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" event={"ID":"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb","Type":"ContainerStarted","Data":"e951f4f03371ec55dc5f3e48a1367b2b71d375b075a902f792202889dbbea009"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629123 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" event={"ID":"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb","Type":"ContainerStarted","Data":"d02919abbb4b42258350951ea9c9a4298d4828f8160b708b55f0a6383e536f04"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629132 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" event={"ID":"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb","Type":"ContainerDied","Data":"18d29e751749b7ea9876b738d17d6268502d86978c1804f16e31b40402471107"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629142 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" event={"ID":"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb","Type":"ContainerStarted","Data":"aa71c4fa879120a78bd3b6a5ee4f553adcd2305018af6f53632371d2a776a283"} Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629151 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" event={"ID":"16d6dd52-d73b-4696-873e-00a6d4bb2c77","Type":"ContainerStarted","Data":"34881b0a33741767f43b826868d3348dca748d3964c8f347ae447e2ba7dda28a"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629161 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" event={"ID":"16d6dd52-d73b-4696-873e-00a6d4bb2c77","Type":"ContainerStarted","Data":"78400b95bca26d5b5b6a101069ed9dc03c843794bfde2ac90af2ffd94a8b56c9"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629169 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" event={"ID":"16d6dd52-d73b-4696-873e-00a6d4bb2c77","Type":"ContainerDied","Data":"e6e379ec088445dd86d2191d2d0584d608d0fb6a75f60858cd436421f083f620"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629179 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" event={"ID":"16d6dd52-d73b-4696-873e-00a6d4bb2c77","Type":"ContainerStarted","Data":"ed48d3d3cb753c9bbe342f9ecdd79f0991ed3456ddbdf3081cbeeab5126bcab1"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629188 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" event={"ID":"1fe69517-eec2-4721-933c-fa27cea7ab1f","Type":"ContainerStarted","Data":"4fd1f054de6dcbc46bdd02fc9bf3ec3e08235db2968aa7d6b81eadf482d090a3"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629197 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" event={"ID":"1fe69517-eec2-4721-933c-fa27cea7ab1f","Type":"ContainerDied","Data":"8fa1fcd077e28cf5cfeec8c2cafd29cf0677802573ac33c46747c76a0973c8ec"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629207 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" event={"ID":"1fe69517-eec2-4721-933c-fa27cea7ab1f","Type":"ContainerStarted","Data":"14c2be5daaa831938eade439658f265a27e1cda2542e3eb0c4e1d57c63fee064"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629217 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" event={"ID":"1fe69517-eec2-4721-933c-fa27cea7ab1f","Type":"ContainerStarted","Data":"5fc1828a85716c5c152a1e9d497ac8c147726f1a98a02df72c44bdcd9feda4f1"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629227 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"648933d86ebc41b4f0c29dee7c6def360e8626c8f16e72ee5fb4e3e4b02a93f1"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629236 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"29c1db2527f092355034b5557942ea50b25282b9b77501d427c1a6d0e01d2771"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629246 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerStarted","Data":"6edbad64d87976b1f93118bbd0e1e9a7e395f45b91ea14e4ae685cc9850e8c3c"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629256 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerDied","Data":"6e7cd59de9caeb6625ff93f951dca8b15c57f96db1e17aebced0a5231f411d3f"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629265 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerDied","Data":"7caadc72530799fe020f6b0140bace32e6cb7e8ebbe6207315d6d035384c83d6"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629274 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerDied","Data":"177aef91a6eb47e06724759a7ce69757e5533636be520f8861b5d3c44d7c4272"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629283 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerDied","Data":"187177cba6632230d116641fd3dad458ff096f751d761a5c25483f731b58481b"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629293 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerDied","Data":"86aaca74eb46c2a67484d7ed32bbe3315e4c31acc5fa267db57dbe7175337821"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629302 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerDied","Data":"c092110b72556c746170c7d0567154da90861fa9515b4bc320e9e6d1cc856cd6"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629311 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts4p" event={"ID":"b6e6d218-d969-40b5-a32b-9b2093089dbf","Type":"ContainerStarted","Data":"95115710de33578fe832a95630e8d98eba6ecc806a442bdc7740ad889ac1e80b"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629319 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"77c5708572ab9b4b6918c12a1fcd864571adf469d8703ecc7203af8fab7885f3"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629330 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629338 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629347 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629356 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629365 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629374 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"13613c47bf97c812cc9e166f449f1af9864a34c9dcb66bd85e8e3c727e970a41"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629383 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" event={"ID":"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367","Type":"ContainerStarted","Data":"0e3955eb45775218b3ec78e9d48cc3dcca22b622e9fd2c4efce8aad1a0511807"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629392 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" event={"ID":"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367","Type":"ContainerStarted","Data":"89ac61279a537a1903577035106a26789b5d8208200729a8969d5e1dbcb119e4"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629400 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" event={"ID":"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367","Type":"ContainerDied","Data":"361cac7f381ef490c05a6ad20d7d519e61ac704ec32bc6d37576fd4551ff3afc"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629410 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" event={"ID":"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367","Type":"ContainerStarted","Data":"864b7e188cfb62e2b7e87dc90ff4536aab0f9cd5aed1bd5481272fd1babe2e98"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629420 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerStarted","Data":"9fb95b2eeb097676234cbbf758fd01689ed32c313ae8911055836e8c306a38f8"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629429 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerStarted","Data":"031dcaaee6eadbfa72ca313aff262add31ad56554e56df47aa69a95767f1176d"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629437 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerStarted","Data":"24414df873e0571bc67283c69593b9f634f2224fa05d695362ba0c99afbe232a"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629445 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerDied","Data":"40d63e74e24fee68be44b5de74837dcb78a9dc13e3f7cf14b4e7c069fc14a3c1"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629455 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerDied","Data":"ba4791195ab28fdefd71609ee2f152b2f868666e0ec80047600b61f1c976a50f"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629463 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" event={"ID":"caef1c17-56b0-479c-b000-caaac3c2b249","Type":"ContainerStarted","Data":"f83848e1580bc2bc923ed29b258b640fe63d1b2a36889eeff462ef2f63db0d04"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629472 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6","Type":"ContainerDied","Data":"992d06369bcdfc83fe57ae6d1c5dce1f2cfa2163b4588fe5df6d49020418c795"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629482 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"3ef51d3b-cd8b-4f34-961e-8daebbed3ca6","Type":"ContainerDied","Data":"00c49d62b94564e456b20bb8a4dbb2c93a1fe2806ab8327bbf14d442fc57441b"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629490 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00c49d62b94564e456b20bb8a4dbb2c93a1fe2806ab8327bbf14d442fc57441b"
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629498 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-djs75" event={"ID":"448aafd2-ffb3-42c5-8085-f6194d7862e5","Type":"ContainerStarted","Data":"3c665ce9d0faa8d1bd8dd54a769a338f58b327d40c8c797dba804d0cf7affadc"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629508 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-djs75" event={"ID":"448aafd2-ffb3-42c5-8085-f6194d7862e5","Type":"ContainerStarted","Data":"0170d69b891340d8304a044f9ba11f3c45572b8e1e7f16d78f09e0c25d8c5a22"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629517 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp" event={"ID":"e3cc4073-a926-4aba-81e6-c616c2bb2987","Type":"ContainerStarted","Data":"c7fbdaabc9defebc24663b20b460123ec251f6593568a39a6e85af3aef0bcfd5"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629528 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp" event={"ID":"e3cc4073-a926-4aba-81e6-c616c2bb2987","Type":"ContainerStarted","Data":"535151362e36c1745033704c37dfb910d9260b348b0c35a197ec5a2c74a4ea53"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629539 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" event={"ID":"86f6836b-b018-4c7a-87ad-51809a4b9c7a","Type":"ContainerStarted","Data":"56def389d27cfe7ad67180dd3ed63a339125a9d6855768c06cdeebbe5ed251cd"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629548 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" event={"ID":"86f6836b-b018-4c7a-87ad-51809a4b9c7a","Type":"ContainerStarted","Data":"ffba8bf7d32818ce3d1f44fe5a89b60d238835da37d2ceadd236bb1c7f7c8066"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629557 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" event={"ID":"86f6836b-b018-4c7a-87ad-51809a4b9c7a","Type":"ContainerDied","Data":"57a9d244672b000b813223a646214cb5149d5553c3f6c953fcf4645211da137b"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629567 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" event={"ID":"86f6836b-b018-4c7a-87ad-51809a4b9c7a","Type":"ContainerStarted","Data":"51c5a5d32ca643efba642911927baab174d9c9270d18541b0810089261e8c8d5"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629577 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"53835140-8eed-401c-ac07-f89b554ff616","Type":"ContainerDied","Data":"ac1ebe21f01db828cbdc3775b7cb4f962d321758483e5f64757855bd43976682"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629587 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"53835140-8eed-401c-ac07-f89b554ff616","Type":"ContainerDied","Data":"823465cca5c74108f34569b06808ad03bfdc5a9d5fe983b835a9ba1e796ceb31"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629596 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="823465cca5c74108f34569b06808ad03bfdc5a9d5fe983b835a9ba1e796ceb31"
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629604 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerStarted","Data":"c396e73ee6b7eb5c6449cf276a9d0d5ae9c9bc55bb4f24a00ddad593e1c6275c"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629613 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerDied","Data":"a95da6b755620b3477b82b60290cab82bafb501ad18fb013d6a2d035fb2977b7"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629623 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerDied","Data":"8b677f9dfe1adb3fd4defb49e7d0b98454fc7a8c20e2d380e3e690cdf86abbc6"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629632 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerDied","Data":"1233b754482b6558abf240af9822b6209076badce1d5bcade0d4d98c88cc1f1f"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629642 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" event={"ID":"d3ca2d2f-9f31-4524-a28f-cf16b02dd711","Type":"ContainerStarted","Data":"6315ef904771a7f7ee8f8fb64b568088a83f03dc9235439160e67d9df1c9a04f"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629652 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-gprr4" event={"ID":"33675e96-ce49-49be-9117-954ac7cca5d5","Type":"ContainerStarted","Data":"e228425982ffade67c1a967b350cd6a3af970665a081f0a86186926eabc43343"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629661 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-gprr4" event={"ID":"33675e96-ce49-49be-9117-954ac7cca5d5","Type":"ContainerDied","Data":"93c53e18dcac71f47a3746e6562e8b692068a3b0ff7c4afe8e6e0d3f178f230b"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629673 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-gprr4" event={"ID":"33675e96-ce49-49be-9117-954ac7cca5d5","Type":"ContainerStarted","Data":"30db4aa1175c82b753660c5597ea88a713c4233cb318dbcdd55159d329e5e404"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629682 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-gprr4" event={"ID":"33675e96-ce49-49be-9117-954ac7cca5d5","Type":"ContainerStarted","Data":"3595d9d8fc957b18c48383f1ad0fcfa521ef5e3e33c6ab788b51ff8638981630"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629690 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"e37813285486b0462bda9c69308d201df650878eef48faf9f76e3de05ff0a8ac"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629700 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"388297d97925469d6ea9b84d2f2a86a576d0a7b6f5f083892f33906a5b9e0f04"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629709 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"f4c7e525c09ff1dff6066196cae634275eb0ec1bb486fdb04d0889a7fba258c3"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629718 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"1fb66d91bf9251ceb0c28561e4de849b3048f457180199219b47b4ae089b8f04"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629727 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"a9910dbc8b4d784e8e79d6947c350b95a90988deee0d33fc3478aba56e03a8cd"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629735 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"3bc728ed313ea4c2c24bfa6e5ec35ea80b76ead7f7237f5bfbb4c7d63e868b56"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629744 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"3bde77e581302fa688ce598a59eb1521eeb691223c05ecf792bb7f274b1fd8f2"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629754 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"b6ea946617b2fbe51c03eb02d48883421215780882113bffd73308a394e3acaf"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629765 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"996b54ad7bf339a39ffff49432d0181ad23ef73bddec2b3817ca026944ee2962"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629774 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" event={"ID":"8b73ae08-0ad7-4f99-8002-6df0d984cd2c","Type":"ContainerStarted","Data":"5a018964150859d45676b2b2c0dbe19b3259b6b089851d91eea98e412d65f129"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629784 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" event={"ID":"8b73ae08-0ad7-4f99-8002-6df0d984cd2c","Type":"ContainerDied","Data":"cfbd27b76aa0dc7c10ce1de7a1bdca66b3303ee8a7bc370fa5d11a1d913c8168"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629794 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" event={"ID":"8b73ae08-0ad7-4f99-8002-6df0d984cd2c","Type":"ContainerStarted","Data":"afc706c41127ee1f98bf413cd8a012a0e0a8f183eef4bf77721d14a272ded89e"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629804 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" event={"ID":"8e8c5772-b6e2-43d8-b173-af74541855fb","Type":"ContainerStarted","Data":"fde4c1f926ef51136abe74f14fd5102b8adec09263eab2c1bc2673f3a644e9e6"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629813 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" event={"ID":"8e8c5772-b6e2-43d8-b173-af74541855fb","Type":"ContainerStarted","Data":"f67a6f8819fd04404e66693d9145f5c750d553aae06fb05a6df8c4ce1725f387"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629822 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" event={"ID":"8e8c5772-b6e2-43d8-b173-af74541855fb","Type":"ContainerStarted","Data":"ec1f2942b833e4699e40cac84e92b5387087cd186af08453ba24e486f285439e"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629830 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" event={"ID":"8e8c5772-b6e2-43d8-b173-af74541855fb","Type":"ContainerStarted","Data":"04c921d85b432c0d1b6bd571166f434dca8313768c8990c88277ecdb55bd26c7"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629840 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"fea431d7-394f-4639-abd6-c70a28921fc6","Type":"ContainerDied","Data":"91f517d397ca83de4c56e84947b8179187f25ef947f76871a498051ccbc41700"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629852 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"fea431d7-394f-4639-abd6-c70a28921fc6","Type":"ContainerDied","Data":"c88ebe1ca0622fd22f4a19976f3ec2cf228a80d7134db8d5e9d57aad94e932f3"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629860 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88ebe1ca0622fd22f4a19976f3ec2cf228a80d7134db8d5e9d57aad94e932f3"
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629869 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb" event={"ID":"49defec6-a225-47ab-99ff-7a846f23eb00","Type":"ContainerStarted","Data":"dba9c42dcf7fcdb0a76b8629c779f71b81357a7ea8751c9b83573eb252b5a3d1"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629878 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb" event={"ID":"49defec6-a225-47ab-99ff-7a846f23eb00","Type":"ContainerStarted","Data":"01fa54e3fedf15625b874769be8058628ecbf8d9c1e1408b5cd8a41440ab8cfc"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629887 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb" event={"ID":"49defec6-a225-47ab-99ff-7a846f23eb00","Type":"ContainerStarted","Data":"ec64844e3e46d42ec4c570bb811039de046f41f872bc256c338ea6312e07ba0d"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629898 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"246a4a72bf2ebfa2d43942f255f719a181c7fa6fae84b5f564297d3cc7eff684"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629908 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"45a697749c461413b0722aa1be0b316cc858779a0e80c5ef44f0a3c27a2f1822"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629937 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"8aa8f34057d37d62316a09602947b9934df303dc999d1b14efc423cb04940c72"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629949 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"5ea8ac7578359ce087855682fd87fbd08a72604f8701716ddbb28b051d93bff2"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629959 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" event={"ID":"19ce4b45-db46-4fc3-8d72-963de22f026b","Type":"ContainerStarted","Data":"4b1e59dab15a09dcf91960fbb173589cc11af80f8d763992d3393dd40ec3c134"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629969 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" event={"ID":"19ce4b45-db46-4fc3-8d72-963de22f026b","Type":"ContainerStarted","Data":"ea1c995ced42f5f70bb2e5d4eefdbbd65e8b628be215ee59daaba52d55c8ad0f"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629979 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" event={"ID":"31d71c90-cab7-4411-9426-0713cb026294","Type":"ContainerStarted","Data":"87f837e420d5053d4442b3cdb2fe63d6e5ee3cff979f63a5c0302a5647f7f2f6"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.629989 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" event={"ID":"31d71c90-cab7-4411-9426-0713cb026294","Type":"ContainerStarted","Data":"81b14b205a5b43d7cf78b359f564d3ae3e67aaf00f87262df973d130ce6f30c0"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.630039 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2cdp" event={"ID":"ac3680de-aabf-414b-a340-5e5e6aea4822","Type":"ContainerStarted","Data":"c0aa3c8e94eb252e0640fa7490825fb0751114e737ccaefad40144b9aceb63d9"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.630054 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2cdp" event={"ID":"ac3680de-aabf-414b-a340-5e5e6aea4822","Type":"ContainerDied","Data":"55b93e62b4f65de932584b817ba60092f21e3f44ea709a7dccfe6475d2084e38"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.630073 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2cdp" event={"ID":"ac3680de-aabf-414b-a340-5e5e6aea4822","Type":"ContainerDied","Data":"c971e0c69d94dc6cc3921b26332fbb6cd07c9071a5e1bbf75f6a1abf3da41b6d"}
Feb 20 15:01:03.629947 master-0 kubenswrapper[28120]: I0220 15:01:03.630087 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2cdp" event={"ID":"ac3680de-aabf-414b-a340-5e5e6aea4822","Type":"ContainerStarted","Data":"8df5627ff680da0c81aa3a3c2df511cdff6fa3f30ba3845441250cbb689ca7f4"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630102 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"380174fb-b30c-4f45-9119-397cdca91756","Type":"ContainerDied","Data":"28ee0d7fd2e81f54f5dcd52927e71c388f397b4ec8b363fb1c98a6fb82168cd2"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630113 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"380174fb-b30c-4f45-9119-397cdca91756","Type":"ContainerDied","Data":"1ee17e2383cea0ad71bf0ed7b91b99cbf73c1a9e377abadef6ff61fb1e1e6676"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630122 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ee17e2383cea0ad71bf0ed7b91b99cbf73c1a9e377abadef6ff61fb1e1e6676"
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630130 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff5aeff3d91fe04ad5b35e5f18daa8ee28aba3161b0999bafdb650c9674062ac"
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630137 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" event={"ID":"a39c5481-961c-4ac2-8c5b-a2c0165f4188","Type":"ContainerStarted","Data":"17e8999646d64007d1bd58d640bc79694199b53000fbeaec2e4a35a48342c1a5"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630147 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" event={"ID":"a39c5481-961c-4ac2-8c5b-a2c0165f4188","Type":"ContainerStarted","Data":"ab0f49b2f7d1b009ca5ecc4b169081ae95aaaf6b5ee65b7672e3618ef61d1e7f"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630155 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" event={"ID":"a39c5481-961c-4ac2-8c5b-a2c0165f4188","Type":"ContainerStarted","Data":"49c14eb0ee80e8816f81743c980057c7a3f0930b6f7facd4c8a07a25d04b2a16"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630164 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" event={"ID":"a39c5481-961c-4ac2-8c5b-a2c0165f4188","Type":"ContainerStarted","Data":"4e9788fdd4565e3a230622830adb39ca18b14112a272177c052904a2d24b6cd0"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630174 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerStarted","Data":"2ae4537b93ca1df380fb49c25fa560c619b235ffc48c39d0f2e8fa5a73331fc8"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630185 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerDied","Data":"f1b1e34a79f20570df08b5141ba77d85f604d72218b6eb7fe601f67b1fcd7a77"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630195 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" event={"ID":"a1af84e0-776b-4285-906a-6880dbc82a7b","Type":"ContainerStarted","Data":"ba0f9ce144b093c1fbdb0462da21ced21845e2aa8fb2233766270fcddb816e51"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630204 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wddt" event={"ID":"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d","Type":"ContainerStarted","Data":"52067142a26667c2638519b1973ebe093bf73aa0ce624b9cf4768d4f63063be7"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630213 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wddt" event={"ID":"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d","Type":"ContainerDied","Data":"501e152806072f51a6aa348d15cc2667dcd91a44e63ea82bf19b7f6a5b79b7c9"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630224 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wddt" event={"ID":"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d","Type":"ContainerDied","Data":"e7333c1741153b59af991a3ad87866cd9c88f6ffc09e8e9cf921a7d0c933ce1e"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630234 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wddt" event={"ID":"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d","Type":"ContainerStarted","Data":"236aeb004972a9d3e9949ce545b3cfedb3b4ea60df38f4b61a82d0b2465524af"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630244 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4wzg" event={"ID":"93786626-fac4-48f0-bf72-992bc39f4a82","Type":"ContainerStarted","Data":"b0987a23de1af7452aa858f67b72055860aa4f74e71922797df18cc1e04dddf0"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630253 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4wzg" event={"ID":"93786626-fac4-48f0-bf72-992bc39f4a82","Type":"ContainerDied","Data":"4b48185bed34b04ded3112db1a2c329d504a7ceb8c020ba9fbe406707b9c3662"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630264 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4wzg" event={"ID":"93786626-fac4-48f0-bf72-992bc39f4a82","Type":"ContainerDied","Data":"56019874af29c4e772f7520294fcbc7349ad7c86907d26939ad87c2a68027c4a"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630274 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4wzg" event={"ID":"93786626-fac4-48f0-bf72-992bc39f4a82","Type":"ContainerStarted","Data":"db318f21d539d497ae2372897b56aaa3b6fedeaae97e556d74c5b3c251315d6e"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630284 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24c827995023caaffd01654949c8d4dd","Type":"ContainerStarted","Data":"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630293 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24c827995023caaffd01654949c8d4dd","Type":"ContainerStarted","Data":"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630301 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24c827995023caaffd01654949c8d4dd","Type":"ContainerStarted","Data":"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630310 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24c827995023caaffd01654949c8d4dd","Type":"ContainerStarted","Data":"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630319 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24c827995023caaffd01654949c8d4dd","Type":"ContainerStarted","Data":"733f20d59a2548ac1c9bcca1dc13fb3a2581f1cde83bb3bdf7f826c178e76f76"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630328 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr" event={"ID":"900e244c-67aa-402f-b5f0-d37c5c1cedf7","Type":"ContainerStarted","Data":"1c41dabedad84cad3a05f49f849b79c399c388e8f2b9c7bfb18efcd28c2ae0be"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630338 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr" event={"ID":"900e244c-67aa-402f-b5f0-d37c5c1cedf7","Type":"ContainerStarted","Data":"9bd614ac7dafc38d2154363d724a872731a806692546d4bc858006cdc5ade17d"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630347 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-bk9bp" event={"ID":"99fe3b99-0b40-4887-bcc8-59caa515b99f","Type":"ContainerStarted","Data":"249c8f01deec61596704fed74ef02874dd0095cff46bbbc1facf51120bbc8333"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630357 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-bk9bp"
event={"ID":"99fe3b99-0b40-4887-bcc8-59caa515b99f","Type":"ContainerStarted","Data":"62df2fe99b665d08438eea218ad5ac1857eaea573fdf0c22507e4202d78adb51"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630365 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-bk9bp" event={"ID":"99fe3b99-0b40-4887-bcc8-59caa515b99f","Type":"ContainerDied","Data":"ac76df8cb547ae36da1275aa8fb2cdc86502281cca8b0c482befd5640340a0ca"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630375 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-bk9bp" event={"ID":"99fe3b99-0b40-4887-bcc8-59caa515b99f","Type":"ContainerStarted","Data":"2d789ae2430f40a62d0c76334dce72b1228320484eb36b8f7f3663eb8534eb42"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630384 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-ljvkb" event={"ID":"929dffba-46da-4d81-a437-bc6a9fe79811","Type":"ContainerStarted","Data":"e6b22158f2a0887e8ea0d74b234993e0aec608e03c27efe7a886fc0349f774e3"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630393 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-ljvkb" event={"ID":"929dffba-46da-4d81-a437-bc6a9fe79811","Type":"ContainerStarted","Data":"07f2250f0416c7a8aaa5ba7190cd272a32f30bcb4026105fc1ebf0050f1e79f2"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630402 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-wtxfh" event={"ID":"014f3913-ac7e-431a-880c-91d979a5dfc7","Type":"ContainerDied","Data":"d0525760cb8ba3e4a202836682905e3209d011265d322e121763f9e03af800fb"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630413 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="assisted-installer/assisted-installer-controller-wtxfh" event={"ID":"014f3913-ac7e-431a-880c-91d979a5dfc7","Type":"ContainerDied","Data":"617ccef4b48beb8ed1f21a9b1c418d8de1fbc1ee6e5e89c3998a1f0b78051407"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630421 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="617ccef4b48beb8ed1f21a9b1c418d8de1fbc1ee6e5e89c3998a1f0b78051407" Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630428 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" event={"ID":"c81ad608-a8ad-4289-a8d2-d48acb9b540c","Type":"ContainerStarted","Data":"76678f3b3771aa596ae00afe94f70cbbd9bcae26c675da96c50642342f6abcee"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630438 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" event={"ID":"c81ad608-a8ad-4289-a8d2-d48acb9b540c","Type":"ContainerDied","Data":"5433accfcf1efda61ccbe8f683016067c773a6f6dbc87107ff277c75114e35c4"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630447 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" event={"ID":"c81ad608-a8ad-4289-a8d2-d48acb9b540c","Type":"ContainerStarted","Data":"92d6a373c92ade68969e49443823f212abf3c0859e9aaf5d10ff5913a474e6f8"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630457 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" event={"ID":"fc334fff-c0bf-4905-bcdb-b0d2a35b0590","Type":"ContainerStarted","Data":"e10f915b137c12c85dbe6de89c833ba4a9f763caac14e31f03d7e9153f656999"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630467 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" event={"ID":"fc334fff-c0bf-4905-bcdb-b0d2a35b0590","Type":"ContainerDied","Data":"c477064b0f3fd6cd0d107cda0e6daa47e69c108cc08e8c15adda744ad3c559d0"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630476 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" event={"ID":"fc334fff-c0bf-4905-bcdb-b0d2a35b0590","Type":"ContainerStarted","Data":"2cc001d9b9602fb584b5d0b096a0d40fac4dbe465b509891b7825972dc39ddc5"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630488 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" event={"ID":"fc334fff-c0bf-4905-bcdb-b0d2a35b0590","Type":"ContainerStarted","Data":"0c48d8481d8bb6541d7d83f4ffc4e7c6003e82f4f8d378fb9a1333d706bc6f14"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630497 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" event={"ID":"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a","Type":"ContainerStarted","Data":"0280eb835d13df084844046377ae19fb68b78454fc360d2bbb9b0a6d7af5b23f"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630507 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" event={"ID":"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a","Type":"ContainerStarted","Data":"56784add7fab2d6fa30c1dec4a904d183b8bd0ff401f8eca8e9ad2aff7741c30"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630516 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" 
event={"ID":"4ecbdf77-0c73-487e-943e-5315a0f8b8d4","Type":"ContainerStarted","Data":"e95fbb8f53ad11db019f7bffa9dab7bf19c983cfeacec893299776b627fcb23e"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630525 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" event={"ID":"4ecbdf77-0c73-487e-943e-5315a0f8b8d4","Type":"ContainerStarted","Data":"972260fa4d71d5a14fa2c2c948e5708100e799e6a9e6ff6a656d3e5a79c34eaa"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630534 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" event={"ID":"af7b6f34-adca-4bdb-9e41-e2995a1d67a8","Type":"ContainerStarted","Data":"e969545d1642c072152a5ec102f1eb7f4892e0030ac35eab40601381088404b4"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630543 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" event={"ID":"af7b6f34-adca-4bdb-9e41-e2995a1d67a8","Type":"ContainerStarted","Data":"3ee41ba4abbcbb86e18b3b6f53b30fff65e0915edf9c525908f5d8d1e3b5de7b"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630552 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" event={"ID":"af7b6f34-adca-4bdb-9e41-e2995a1d67a8","Type":"ContainerStarted","Data":"118104a32f855cf343fc9a68201c174973d8b0ae6653c1a549eeef25c7c2eefa"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630561 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" event={"ID":"8a278abf-8c59-4454-94d0-a0d0768cbec5","Type":"ContainerStarted","Data":"c01f7b48911df7bce77798908697a8a45e47a4d0fafcbef1fd81d40c9b28eb31"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630570 
28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" event={"ID":"8a278abf-8c59-4454-94d0-a0d0768cbec5","Type":"ContainerStarted","Data":"56dab50a6ee92d8b7787a1ffbdfc72e9a26511781eb108040e7d6dc84a65109f"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630579 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-cgp8r" event={"ID":"87cf4690-1ec1-44fc-94bd-730d9f2e6762","Type":"ContainerStarted","Data":"7f8dbc22b8958f7d49d97b2d1cc7318ab14c413e48ae6f880d4b31bcda852197"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630590 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-cgp8r" event={"ID":"87cf4690-1ec1-44fc-94bd-730d9f2e6762","Type":"ContainerStarted","Data":"0ea53368ce61e6c8836a7d0c6d716b7e2c7e18ee974ab80f253b08e24d34227b"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630598 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" event={"ID":"bdf18981-b755-4b11-8793-38bc5e2e755b","Type":"ContainerStarted","Data":"db8b2b97e53f2e0f9eb8b077984d360867eb853438c79f964c4316743bc03b9a"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630607 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" event={"ID":"bdf18981-b755-4b11-8793-38bc5e2e755b","Type":"ContainerDied","Data":"71a3faa6e2a13b4bcadc91647966380b556ee1824a73e0209af007ec80d749b3"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630617 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" 
event={"ID":"bdf18981-b755-4b11-8793-38bc5e2e755b","Type":"ContainerStarted","Data":"88c6fd1112c1b3efe31f79a2dc6cd9198555dc6b1c7c6547da60005b56efbb9b"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630626 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" event={"ID":"c0b78aa6-7bc8-4221-81f5-bf62a7110380","Type":"ContainerStarted","Data":"64603a3ea96fd717e2035494db040660bbfe5a6894d3b87f6bdcaa17f69d7f5c"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630635 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" event={"ID":"c0b78aa6-7bc8-4221-81f5-bf62a7110380","Type":"ContainerStarted","Data":"b4e6e35a13489e6753b258d26ed5a83ff62c7c8c4f879f0771edf0596a055016"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630645 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" event={"ID":"c0b78aa6-7bc8-4221-81f5-bf62a7110380","Type":"ContainerStarted","Data":"fdffe43b1b08d49ea8314e914701c72141f7d81a9b8fe2dd80fb3d7e5d551135"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630654 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" event={"ID":"c0b78aa6-7bc8-4221-81f5-bf62a7110380","Type":"ContainerStarted","Data":"7cd291b9260d8474da6db1ea27593954a0b8a80d92876d3da551d5f4c38e22a4"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630670 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" event={"ID":"43e9807a-859c-44c1-8511-0066b0f59ff8","Type":"ContainerStarted","Data":"576abacd055041debbe6e09151c67b20fc73597532194f0a3cbc9b1e0f7ce583"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630679 28120 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" event={"ID":"43e9807a-859c-44c1-8511-0066b0f59ff8","Type":"ContainerDied","Data":"434ed936cc25c1d0e0f36dd52a8572c7b7417d14a5a50821cdca25739e6e9d2b"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630688 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" event={"ID":"43e9807a-859c-44c1-8511-0066b0f59ff8","Type":"ContainerStarted","Data":"e94527abc555de66f60f9e134865dfe60d787ebd1878546078cb9b2523c30cab"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630697 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" event={"ID":"419f28a9-8fd7-4b59-9554-4d884a1208b5","Type":"ContainerStarted","Data":"8b6e9e82961a1b1569e8b4f8d72a5575024ad0e3ea1e52d7f46885fdbde3d82b"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630706 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" event={"ID":"419f28a9-8fd7-4b59-9554-4d884a1208b5","Type":"ContainerStarted","Data":"84da6dcc282a18c48a027b33cd2404e3592b75c697de5dd4ab39e2cebf5cff28"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630716 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" event={"ID":"9fd9f419-2cdc-4991-8fb9-87d76ac58976","Type":"ContainerStarted","Data":"dc441fa27824734a377d9db318c86f20db95ce4983905e77258b9eeaa40c81d4"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630725 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" 
event={"ID":"9fd9f419-2cdc-4991-8fb9-87d76ac58976","Type":"ContainerDied","Data":"5761b5d97bb857209597024a19cdbe2341d245c395e6ce681c8bc8fd7fa023bd"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630735 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" event={"ID":"9fd9f419-2cdc-4991-8fb9-87d76ac58976","Type":"ContainerStarted","Data":"469af398b29095aa460373b4a9d58261db50995525853368aaa76c2198d9753f"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630745 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"277ab008-e6f0-49cd-801d-54d3071036d4","Type":"ContainerDied","Data":"6c7c12ccf7f07aacf9744ba31c10a72a4c19226b35c8d4fd36f32979a50dbaaf"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630774 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"277ab008-e6f0-49cd-801d-54d3071036d4","Type":"ContainerDied","Data":"9973189e4a2bcf54eee01772766f154e4f0414d83cd39d056cbee6f94ee506af"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630782 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9973189e4a2bcf54eee01772766f154e4f0414d83cd39d056cbee6f94ee506af" Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630791 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2" event={"ID":"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463","Type":"ContainerDied","Data":"39729aa63d210240a6c419acbf228b3a124ab4900f3cc120e7b7aead6bf8e73a"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630802 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526645-ff5j2" 
event={"ID":"8c60ad1f-f8d9-4c67-97a3-f9fa491bd463","Type":"ContainerDied","Data":"ea68c4defdeeb01e90817720006f1125f253badcc4d0dde7d2c2223dd487b94c"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630810 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea68c4defdeeb01e90817720006f1125f253badcc4d0dde7d2c2223dd487b94c" Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630819 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-fc795" event={"ID":"787a4fee-6625-4df5-a432-c7e1190da777","Type":"ContainerStarted","Data":"49b822c3e47c1cd6ec2009b226ef965940d964b01865e3cc2dbdf575ba59319a"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630828 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-fc795" event={"ID":"787a4fee-6625-4df5-a432-c7e1190da777","Type":"ContainerStarted","Data":"e1b8782a8564dd4906c6406ffd3ad6cd072d92723a07ad86ed42c394d07ab355"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630837 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb" event={"ID":"64e9eca9-bbdd-4eca-9219-922bbab9b388","Type":"ContainerStarted","Data":"b0ee251bfbaebe0892c05b520dd9bb47366244efbf8033dfe4b8b8ef8373e2f0"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630846 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb" event={"ID":"64e9eca9-bbdd-4eca-9219-922bbab9b388","Type":"ContainerStarted","Data":"cc5528fa6db2bfe114c1842f536c398cb14a3103bc976fa904abdc30e48bc9b3"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630855 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" 
event={"ID":"bdd203e0-3dd9-4e9d-81f1-46f60d235e38","Type":"ContainerStarted","Data":"6de3357e6e18954512d073202b91b501ca58384ea08b18ec75d08c4929c63531"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630865 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" event={"ID":"bdd203e0-3dd9-4e9d-81f1-46f60d235e38","Type":"ContainerStarted","Data":"3209ad8e141d4f4023abb0b8711dc267473b98fd78163c32b9a46c610babe186"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630873 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b6285323-3e75-4d44-ad05-98890c097dd2","Type":"ContainerDied","Data":"e0e54afa304c07256ca81f12b5ac712d5ac8488390931a330fe4a44a3c9b790d"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630883 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b6285323-3e75-4d44-ad05-98890c097dd2","Type":"ContainerDied","Data":"a18ba6fef141df70b03fa378f8e3dafed41e947f342e811cb930b80a2236b753"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630891 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a18ba6fef141df70b03fa378f8e3dafed41e947f342e811cb930b80a2236b753" Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630899 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5qlzq" event={"ID":"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665","Type":"ContainerStarted","Data":"98dbd2fe6ba8be6befbd533cc0cc2296e7e140b533b1c3130dfcc27e3db2bb67"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630907 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5qlzq" 
event={"ID":"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665","Type":"ContainerStarted","Data":"013da989dc1e60fa75e3d1e3955a83dece7eed7353880205a9acd5aa5c2d4d69"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630939 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" event={"ID":"45d7ef0c-272b-4d1e-965f-484975d5d25c","Type":"ContainerStarted","Data":"09d216a3abc55643af39c5d59bcb2e247cd57b0e4c4569a1bf7ef453f5b7658a"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630950 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" event={"ID":"45d7ef0c-272b-4d1e-965f-484975d5d25c","Type":"ContainerDied","Data":"2b921a59215a9b57fc0e140139af8ee009d893b2733cf5fcafdbd68899442899"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630961 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" event={"ID":"45d7ef0c-272b-4d1e-965f-484975d5d25c","Type":"ContainerStarted","Data":"9df920ca539f41ddc66a331c27bc3a12a40dbc8ec795ca71f8a746f6b5203647"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630970 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5c8741d7-c96b-41cc-80cb-81683bb68480","Type":"ContainerDied","Data":"cff869feeda154776fdb80bde49136ec0b5b04dcf06768e009678b70576a1603"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630981 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5c8741d7-c96b-41cc-80cb-81683bb68480","Type":"ContainerDied","Data":"eef1aa66846c305d37d9496640c02851ab1df6ad78f667da48c6c7b15695dd4f"} Feb 20 15:01:03.633157 master-0 
kubenswrapper[28120]: I0220 15:01:03.630989 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef1aa66846c305d37d9496640c02851ab1df6ad78f667da48c6c7b15695dd4f" Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.630998 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"ab3c370c-58b4-4115-a359-b3f55c87284d","Type":"ContainerDied","Data":"00ed587ddf8155d51df42eba4d283cbd6beb09f53d1fc60d2651e845ec7cf08c"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631007 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"ab3c370c-58b4-4115-a359-b3f55c87284d","Type":"ContainerDied","Data":"c88e96be470ca889285a29fe125676aab3c03c8788f261a9c66f2a8654e5e5e5"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631015 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88e96be470ca889285a29fe125676aab3c03c8788f261a9c66f2a8654e5e5e5" Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631023 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" event={"ID":"49044786-483a-406e-8750-f6ded400841d","Type":"ContainerStarted","Data":"34cd67fe375d543593e71b0db6a6c6578ad59b2187779424e14bfbf76ca085fe"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631031 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" event={"ID":"49044786-483a-406e-8750-f6ded400841d","Type":"ContainerDied","Data":"c537be0fb6abb27532917c3ba13de8d47b09b2f7faa20aacc94423594538336f"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631041 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" event={"ID":"49044786-483a-406e-8750-f6ded400841d","Type":"ContainerStarted","Data":"92c9b6ef7965615602e16b5814c26d9915a23507222fc502b624945d6f4ccc53"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631051 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" event={"ID":"27ab8945-6a5b-4f7d-b893-6358da214499","Type":"ContainerStarted","Data":"e8c5d6ce583150e5025bcd44242a6fd0048c02eb48e405e4c26fcefe2dcec569"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631061 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" event={"ID":"27ab8945-6a5b-4f7d-b893-6358da214499","Type":"ContainerDied","Data":"3a018b588cd0fab81aef4437e8a3c01bf2d7562f85789ce7770c3b488cc91b89"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631071 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" event={"ID":"27ab8945-6a5b-4f7d-b893-6358da214499","Type":"ContainerStarted","Data":"07243cbc35256d0bbc44485dfcf1dcdc835463392fa9dc5f89599380e929e672"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631080 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" event={"ID":"a8c0a6d2-f1f9-49e3-9475-4983b50667bf","Type":"ContainerStarted","Data":"f8a433dd00d15430b30f07f3b74ccacefa6f8385a2e11e771e5ee34057464565"} Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631089 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" 
event={"ID":"a8c0a6d2-f1f9-49e3-9475-4983b50667bf","Type":"ContainerDied","Data":"a27dacc9767bb08d41caf26b14c781b3928a704b21733f539c8b91a44b0c4d18"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631099 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" event={"ID":"a8c0a6d2-f1f9-49e3-9475-4983b50667bf","Type":"ContainerStarted","Data":"e4a8f393be39a3a9efde4bf2412add15fe01a8acdf8e5580190095494f3e6b47"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631108 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" event={"ID":"6949e9d5-460c-4b63-94cb-1b20ad75ee1c","Type":"ContainerStarted","Data":"3191dd09efb413807b0f7ac65de89263fd86c8dfae5fdd396c8d8c4703e7e79b"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631118 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" event={"ID":"6949e9d5-460c-4b63-94cb-1b20ad75ee1c","Type":"ContainerStarted","Data":"07e9c574c476b552e4880ea04698b85b76f727446caf5a26cb7851d60cbd7d25"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631128 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" event={"ID":"6949e9d5-460c-4b63-94cb-1b20ad75ee1c","Type":"ContainerStarted","Data":"8468bd2a2161175e696f20868531488b079471cbb37c953cccf04ab9a47ce2b3"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631139 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-nth67" event={"ID":"92008ac4-8deb-4fb9-9116-14d2d005bd36","Type":"ContainerStarted","Data":"e366984c121e8d2e113065b7ddcf8c580aefffdb74afe23f16e38dc9a00e5aa3"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631151 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-nth67" event={"ID":"92008ac4-8deb-4fb9-9116-14d2d005bd36","Type":"ContainerStarted","Data":"6ea59bb762ddd917687d0ab9c9b4c4c212079c243fa33d303d25cc82d89c923b"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631161 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" event={"ID":"a4339bd5-b8d1-467e-8158-4464ea901148","Type":"ContainerStarted","Data":"46fafdf5fa767d53d528bf20ba8d233f608a0480ae3b29b71e6e78f155340f4a"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631171 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" event={"ID":"a4339bd5-b8d1-467e-8158-4464ea901148","Type":"ContainerDied","Data":"23b61efd81399a78fa532e7f0cf8b35a9b7f7f7e97f61e6f0f85ac41949a2a92"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631182 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" event={"ID":"a4339bd5-b8d1-467e-8158-4464ea901148","Type":"ContainerDied","Data":"638df7437edc2bded4ad7d7ef94d2b7ebf2de761535638d3ecef6e0202944682"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631192 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" event={"ID":"a4339bd5-b8d1-467e-8158-4464ea901148","Type":"ContainerStarted","Data":"a9fb4904f90243607c1bd114c0e1c541fb17de9f6f5ce80d7f75369901ce613b"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631249 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" event={"ID":"4c31b8a7-edcb-403d-9122-7eb740f7d659","Type":"ContainerStarted","Data":"8df41532c87905e245b26ec36aa0216e69f949d06e668930ddee22c3fd75c8ba"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631261 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" event={"ID":"4c31b8a7-edcb-403d-9122-7eb740f7d659","Type":"ContainerDied","Data":"696e06ef6554e221cbbd27e48c3197d621e72c8d19b1df8b12bd4eab6b3279b8"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631272 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" event={"ID":"4c31b8a7-edcb-403d-9122-7eb740f7d659","Type":"ContainerStarted","Data":"1913b004153de96aee747d5e43e4468694e4be30746f1b0a2aa4f60e2176707c"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631285 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5frvf" event={"ID":"ef3a09a5-b019-48a3-97f8-7ddadb37394e","Type":"ContainerStarted","Data":"ed3a51968be582f9405f7944baf7a18811d9549cf28f115ef204ab2e3755c685"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631294 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5frvf" event={"ID":"ef3a09a5-b019-48a3-97f8-7ddadb37394e","Type":"ContainerStarted","Data":"34cc992d367669608546ba8ae39873d4139dfeeb4850c5979567cde508c8b524"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631305 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" event={"ID":"84a61910-48eb-4c27-8d69-f6aa7ce912ca","Type":"ContainerStarted","Data":"0c7b8bf82047d7f14cd11a58c6013f1477a0bb779432cc1841cbfb0ce5b3642f"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631316 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" event={"ID":"84a61910-48eb-4c27-8d69-f6aa7ce912ca","Type":"ContainerStarted","Data":"c611b24ddb76e62693aedc7b9d79cfbcb4b25fe7da745bd0f6bf1d9bed95789d"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631325 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" event={"ID":"84a61910-48eb-4c27-8d69-f6aa7ce912ca","Type":"ContainerDied","Data":"033a3d2eac65c1b4d9f27c950aeb8dc662b4f02d9215e718db95c771bce201e1"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631335 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" event={"ID":"84a61910-48eb-4c27-8d69-f6aa7ce912ca","Type":"ContainerStarted","Data":"34bf21f0d5e74283c2c3382d9b925b925de6b532a3f67ab7bff4afdbe95f9332"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631345 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" event={"ID":"c0a3548f-299c-4234-9bf1-c93efcb9740b","Type":"ContainerStarted","Data":"8955afec05ac17b6d5bd5b27623b6f73413fa01ace341f3ccb7e06f06406e93d"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631354 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" event={"ID":"c0a3548f-299c-4234-9bf1-c93efcb9740b","Type":"ContainerDied","Data":"52bf43d0e30c121fdb642cca3e4e8c737348e2c0806817b6c660ae4bd355d192"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631364 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" event={"ID":"c0a3548f-299c-4234-9bf1-c93efcb9740b","Type":"ContainerStarted","Data":"3e54884bb129553f96e22ded74db5788d449f044a28bbdd487ce407f3c14ba01"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631374 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x" event={"ID":"ee3a6748-0bbc-41bf-8726-a8db18faf03b","Type":"ContainerStarted","Data":"f8fca03cf5f84009dcd71f69da0387d2543ee01bf4de2848abd4137f8d885ea7"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631384 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x" event={"ID":"ee3a6748-0bbc-41bf-8726-a8db18faf03b","Type":"ContainerStarted","Data":"d290cc412a6f01775c2fd7e994caaa64944314f12f379ba1be952f8d473106fb"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631393 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x" event={"ID":"ee3a6748-0bbc-41bf-8726-a8db18faf03b","Type":"ContainerStarted","Data":"5412cad37cfea94450b3688c380c9cc1161ff7a9a7f0b141297d24e746b33629"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631402 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"986049a1-b3e4-4dca-b178-55eaa7a27bfb","Type":"ContainerDied","Data":"6f844b10f8ac3c87a0a1682a1e7ea9ccbec49915b04b1fd7a88cca60f9004b80"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631412 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"986049a1-b3e4-4dca-b178-55eaa7a27bfb","Type":"ContainerDied","Data":"c5695ade0d175e611702bd38877ae968a3b086637dc9039f70a7cafe4447aa4c"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631420 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5695ade0d175e611702bd38877ae968a3b086637dc9039f70a7cafe4447aa4c"
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631428 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" event={"ID":"5f55b652-bef8-4f50-9d1d-9d0a340c1dea","Type":"ContainerStarted","Data":"b32808c0432a17a8a50986c476d3d940501a7c0d29edb9bdac073e34bc6d47d4"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631437 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" event={"ID":"5f55b652-bef8-4f50-9d1d-9d0a340c1dea","Type":"ContainerDied","Data":"4c99e85f05d7056363eecf219cc429ad9226d3b3266d2b4c70190b2024933a11"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631448 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" event={"ID":"5f55b652-bef8-4f50-9d1d-9d0a340c1dea","Type":"ContainerDied","Data":"a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631458 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" event={"ID":"5f55b652-bef8-4f50-9d1d-9d0a340c1dea","Type":"ContainerStarted","Data":"80505c2710f2e2216eec6a4e82e9601038f01af58386ea11bb977eb9c2b78e51"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631467 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" event={"ID":"ae43311e-14ba-40a1-bdbf-f02d68031757","Type":"ContainerStarted","Data":"addc02a5d315de2c99503b1e4806fe5dbe1a200c2f58b5fc9834e01320e787df"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631476 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" event={"ID":"ae43311e-14ba-40a1-bdbf-f02d68031757","Type":"ContainerStarted","Data":"98cefd97ab706d635159fe166fdd26af88aed13fdb9a558beff59fd90bc32cf6"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631484 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" event={"ID":"ae43311e-14ba-40a1-bdbf-f02d68031757","Type":"ContainerStarted","Data":"3c3c6a0066a2da65aa0c6f5621f865feea551c3602354f05a3bf53b7f588a01e"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631494 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" event={"ID":"d28490b0-96ca-4fe0-8fae-e6f8390f933b","Type":"ContainerStarted","Data":"e59cdaeddac19ab24ca5869bcd614625e7a228c980805267b2b8efe30053d76b"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631503 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" event={"ID":"d28490b0-96ca-4fe0-8fae-e6f8390f933b","Type":"ContainerStarted","Data":"a6bc9f04ac2dc38938d0e42cdeacec4b6423ef553431eb66d814af552bb9732b"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631512 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" event={"ID":"d28490b0-96ca-4fe0-8fae-e6f8390f933b","Type":"ContainerStarted","Data":"80b53aa57494cc0bc6bbacad6b2e04131adc3c0ab6e7a77f83dd0c6c91461d7d"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631523 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"a7fc05cab5fa5d76a89a52b0f5a53914558966a5d0c6a7984a6068b77bc7a605"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631533 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"eb4d015a488a0971e00d7594bd984177653b19f895e48b58140d83ce5c2ae58c"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631542 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"62a99730256e3da9df09e2ce0694443877449feb60b369ea0edbb62d7804a6cb"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631551 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"ed310b368397e816ac9d5651b95ab2ee3ceabf0ed71343ecfdc11e63fc82bf1d"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631560 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"90e4db478907522d61b2829ddc102e1a38c166c49934ed77415fc7f72dda10f0"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631570 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"5ca73d36b494abb0df059713a8f5425aebb316af1c2c669bb3690be1c4b60660"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631579 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"24d71ae939c27497695911553274ea2034c00a6fd53d16474b7a6d926d474f9c"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631588 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"d5988f2df105cf8c575e9b1c55b4b257b1238892c0d445f0831ad99b911ed459"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631598 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerDied","Data":"325237c1c62eee1b6dbe253582be0281f8aeaa79ed6559821ac6420b7b9c38ca"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631608 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" event={"ID":"21384bd0-495c-406a-9462-e9e740c04686","Type":"ContainerStarted","Data":"26c5fe83ca44257f00aa75056a5ba23aa71fd99df73033faf567ea11ded1340f"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631618 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" event={"ID":"26473c28-db42-47e6-9164-8c441ccc48ca","Type":"ContainerStarted","Data":"cf505a16e5ad42c8152bcd72aafcb820926098f85d46295fdbd79e955e20ab07"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631627 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" event={"ID":"26473c28-db42-47e6-9164-8c441ccc48ca","Type":"ContainerStarted","Data":"4d7a859ad253e344142e3d8002817623ee421d3b324eff2b6246c1b1fdd11bc1"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631636 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" event={"ID":"5d2b154b-de63-4c9b-99d8-487fb3035fb9","Type":"ContainerStarted","Data":"17a1dcd626d2cfc41eeea0541351130306226005125096a98152fe8eaa485bfc"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631646 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" event={"ID":"5d2b154b-de63-4c9b-99d8-487fb3035fb9","Type":"ContainerDied","Data":"c3644a2305f2cac790098fa61dc92fdcede4316b05ab9e68ec6a558810ecdfcf"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631655 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" event={"ID":"5d2b154b-de63-4c9b-99d8-487fb3035fb9","Type":"ContainerStarted","Data":"708920ea2d1be46cb95e4867b2c05c1f808d669a1169cbd70df0ac5377ecd8d6"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631663 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" event={"ID":"5d2b154b-de63-4c9b-99d8-487fb3035fb9","Type":"ContainerStarted","Data":"329b7497d730cc1438c1c88bd3563dab745cc5c71baf09835af567df43aee00e"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631673 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"975d0fde-cb2f-4599-b3b7-7de876307a61","Type":"ContainerDied","Data":"a59f2b3ca51cdc733c2fc543e6bd0ce183b3347c73680778d3a84d4f88dd4a1f"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631683 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"975d0fde-cb2f-4599-b3b7-7de876307a61","Type":"ContainerDied","Data":"60d5adb534a09e9e59437dcfd0ecba0aa4cc034a5ffeab8cd4bf643934aa8641"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631694 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60d5adb534a09e9e59437dcfd0ecba0aa4cc034a5ffeab8cd4bf643934aa8641"
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631702 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" event={"ID":"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1","Type":"ContainerStarted","Data":"1a2c604d274d1ad76efc44b881e36ebb1157c8f409246d344247ee87da1d2861"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631710 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" event={"ID":"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1","Type":"ContainerStarted","Data":"5c30b9cdcf13e6a3816e39ff92455fc96f090fac8eb9899e480122d604e7a1b8"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631719 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerStarted","Data":"a9ce7eb71cc45446f0234e09c6889c880d3cf1028e26cba052e2e23651943f9c"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631729 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerDied","Data":"ba86653512a4222e60f99c9a0811e8150bea75c06b16f3bd7d165d8b4d82ace0"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631739 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerStarted","Data":"7e5ec22b696c92663538e3ab3921b281ba772a0d18ef481de63a1f9eb71af2ff"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631748 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" event={"ID":"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a","Type":"ContainerStarted","Data":"22094081262cfd9afca75424166ecb944e973d770312e29078a1dee4fb675d30"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631757 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5fhb" event={"ID":"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3","Type":"ContainerStarted","Data":"6bdf8ae8895847f111c076e57ac2ee7237248e5947f527438f2b1ae9a2034af5"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631766 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5fhb" event={"ID":"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3","Type":"ContainerDied","Data":"6bb51ccc67529cda0c8d2e85bd6a87b5b5906d7277689a9401dd4cc5bc52c400"}
Feb 20 15:01:03.633157 master-0 kubenswrapper[28120]: I0220 15:01:03.631777 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5fhb" event={"ID":"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3","Type":"ContainerDied","Data":"47e63e5f96b20c92842a652e9774f2aec1b3dc91bda96ad7600899bf883b2ca7"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631787 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5fhb" event={"ID":"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3","Type":"ContainerStarted","Data":"0c1b7791952a54d8b3ef36cceac195dbbcc9face3120a05a59672ee12b84ba46"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631796 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m6hpf" event={"ID":"32a79fe0-e619-4a66-8617-e8111bdc7e96","Type":"ContainerStarted","Data":"337ba8f0ea63092ae9d8ede824c31eaf5e84ca8f14eaf03b8e8583029c921325"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631805 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m6hpf" event={"ID":"32a79fe0-e619-4a66-8617-e8111bdc7e96","Type":"ContainerStarted","Data":"1489b48b9281848030ac8650ba6a4f51919e00d3276dcba9cb79f43f94b0f041"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631815 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" event={"ID":"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de","Type":"ContainerStarted","Data":"8eb9245e6af7170d918b2860e9811085196312cb4a81756246c16b0213c120bb"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631824 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" event={"ID":"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de","Type":"ContainerStarted","Data":"b30b0c2af77d1a0b3adcf4f9ef949ee16aed89bd4c51896da816fdd72fb442ea"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631833 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" event={"ID":"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de","Type":"ContainerStarted","Data":"dd0998467d8099b6ff8531304dd3f0e97b5c79ad6520753dadef997846c4d469"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631841 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-99lkv" event={"ID":"5ea4c132-b6d0-4dc9-942d-48e359eed418","Type":"ContainerStarted","Data":"f999ff4f0f2066a6276195a06d42bb6e1b1ff00d93613cff0f6a63447e475eb5"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631853 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-99lkv" event={"ID":"5ea4c132-b6d0-4dc9-942d-48e359eed418","Type":"ContainerStarted","Data":"6de47c9027ea6c2d8b35fbec623ec40ec3080ea6f35588d0df87b3d552d897e5"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631862 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-99lkv" event={"ID":"5ea4c132-b6d0-4dc9-942d-48e359eed418","Type":"ContainerStarted","Data":"7d7dfb1a01a9470453018e9e4e99ad966573e066e4eb9b370f42ef7d7426a75e"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631870 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631880 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631888 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631898 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerDied","Data":"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631910 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerDied","Data":"5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631966 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"56ff46cdb00d28519af7c0cdc9ea8d11","Type":"ContainerStarted","Data":"2210f3254bc0bc47bf63efd7d8223a017f9ce1d63560804be28d1d5db58e4a7d"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631976 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" event={"ID":"b385880b-a26b-4353-8f6f-b7f926bcc67c","Type":"ContainerStarted","Data":"9ab968c039881eca411605f2dc6ddf6c3bae4902938cad1735091ee161273d08"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631986 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" event={"ID":"b385880b-a26b-4353-8f6f-b7f926bcc67c","Type":"ContainerDied","Data":"fcbb2a13969414b96cd30dbad7457a49997232b9842608fdd68bbd19061a8401"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631996 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" event={"ID":"b385880b-a26b-4353-8f6f-b7f926bcc67c","Type":"ContainerStarted","Data":"fdca7d5d1704511dbbe557b4aad88eeb5de8fd854245d73d9f7c0ff99dbe2f76"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.632004 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" event={"ID":"b385880b-a26b-4353-8f6f-b7f926bcc67c","Type":"ContainerStarted","Data":"437abb0aba17c9c29dae7086b861fc64a62c90a30c1567fbdec9a15f52cef039"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.632012 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" event={"ID":"db9dc349-5216-43ff-8c17-3a9384a010ea","Type":"ContainerStarted","Data":"09404528b44b3c434922019f2c7d2520c924306f4d6dd307fe7646b14b292751"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.632021 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" event={"ID":"db9dc349-5216-43ff-8c17-3a9384a010ea","Type":"ContainerDied","Data":"255184eff0270c34b8e6556e377cc8915ae25bb2f15df7164830c2551d563b2b"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.632032 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" event={"ID":"db9dc349-5216-43ff-8c17-3a9384a010ea","Type":"ContainerStarted","Data":"95650a37daeacacf8e69d045d48ba4a17652648a0c83345072715e4ffcfa2dda"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.632043 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"77c5708572ab9b4b6918c12a1fcd864571adf469d8703ecc7203af8fab7885f3"}
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.631531 28120 scope.go:117] "RemoveContainer" containerID="a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343"
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.635319 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 20 15:01:03.642875 master-0 kubenswrapper[28120]: I0220 15:01:03.641542 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 20 15:01:03.644618 master-0 kubenswrapper[28120]: I0220 15:01:03.643073 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 20 15:01:03.645983 master-0 kubenswrapper[28120]: I0220 15:01:03.645757 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 20 15:01:03.652718 master-0 kubenswrapper[28120]: I0220 15:01:03.652124 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 20 15:01:03.653736 master-0 kubenswrapper[28120]: I0220 15:01:03.653696 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 20 15:01:03.658212 master-0 kubenswrapper[28120]: I0220 15:01:03.656256 28120 scope.go:117] "RemoveContainer" containerID="a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343"
Feb 20 15:01:03.658212 master-0 kubenswrapper[28120]: E0220 15:01:03.656823 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343\": container with ID starting with a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343 not found: ID does not exist" containerID="a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343"
Feb 20 15:01:03.658212 master-0 kubenswrapper[28120]: I0220 15:01:03.656847 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343"} err="failed to get container status \"a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343\": rpc error: code = NotFound desc = could not find container \"a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343\": container with ID starting with a4f9d4f4e0643d58b1e9cfa61dcaf195c6366d9cff2806438f5074f51bd80343 not found: ID does not exist"
Feb 20 15:01:03.658212 master-0 kubenswrapper[28120]: I0220 15:01:03.657847 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj796\" (UniqueName: \"kubernetes.io/projected/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-kube-api-access-rj796\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt"
Feb 20 15:01:03.658212 master-0 kubenswrapper[28120]: I0220 15:01:03.657893 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-conf-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 15:01:03.658212 master-0 kubenswrapper[28120]: I0220 15:01:03.657914 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-trusted-ca\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw"
Feb 20 15:01:03.658212 master-0 kubenswrapper[28120]: I0220 15:01:03.657968 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-env-overrides\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4"
Feb 20 15:01:03.658212 master-0 kubenswrapper[28120]: I0220 15:01:03.657988 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-utilities\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb"
Feb 20 15:01:03.658212 master-0 kubenswrapper[28120]: I0220 15:01:03.658005 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 15:01:03.658212 master-0 kubenswrapper[28120]: I0220 15:01:03.658024 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/84a61910-48eb-4c27-8d69-f6aa7ce912ca-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd"
Feb 20 15:01:03.658212 master-0 kubenswrapper[28120]: I0220 15:01:03.658149 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-etc-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.658212 master-0 kubenswrapper[28120]: I0220 15:01:03.658191 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthkk\" (UniqueName: \"kubernetes.io/projected/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-kube-api-access-tthkk\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8"
Feb 20 15:01:03.658725 master-0 kubenswrapper[28120]: I0220 15:01:03.658269 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-sn97p"
Feb 20 15:01:03.658725 master-0 kubenswrapper[28120]: I0220 15:01:03.658360 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysctl-conf\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl"
Feb 20 15:01:03.658725 master-0 kubenswrapper[28120]: I0220 15:01:03.658414 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-utilities\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb"
Feb 20 15:01:03.658725 master-0 kubenswrapper[28120]: I0220 15:01:03.658409 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/787a4fee-6625-4df5-a432-c7e1190da777-signing-cabundle\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795"
Feb 20 15:01:03.658725 master-0 kubenswrapper[28120]: I0220 15:01:03.658458 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8a278abf-8c59-4454-94d0-a0d0768cbec5-snapshots\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk"
Feb 20 15:01:03.658725 master-0 kubenswrapper[28120]: I0220 15:01:03.658532 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 15:01:03.658725 master-0 kubenswrapper[28120]: I0220 15:01:03.658555 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b385880b-a26b-4353-8f6f-b7f926bcc67c-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r"
Feb 20 15:01:03.658725 master-0 kubenswrapper[28120]: I0220 15:01:03.658576 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-os-release\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 15:01:03.658725 master-0 kubenswrapper[28120]: I0220 15:01:03.658591 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-cnibin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 15:01:03.658725 master-0 kubenswrapper[28120]: I0220 15:01:03.658609 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p"
Feb 20 15:01:03.658725 master-0 kubenswrapper[28120]: I0220 15:01:03.658624 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.658749 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mchbh\" (UniqueName: \"kubernetes.io/projected/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-kube-api-access-mchbh\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.658801 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jshgm\" (UniqueName: \"kubernetes.io/projected/27ab8945-6a5b-4f7d-b893-6358da214499-kube-api-access-jshgm\") pod \"cluster-storage-operator-f94476f49-m2bj7\" (UID: \"27ab8945-6a5b-4f7d-b893-6358da214499\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.658836 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.658946 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-node-log\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.659214 
master-0 kubenswrapper[28120]: I0220 15:01:03.658970 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smglm\" (UniqueName: \"kubernetes.io/projected/db9dc349-5216-43ff-8c17-3a9384a010ea-kube-api-access-smglm\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.658987 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-bound-sa-token\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.659004 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-serving-cert\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.659021 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk2lq\" (UniqueName: \"kubernetes.io/projected/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-kube-api-access-gk2lq\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.659040 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.659129 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.659148 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5p2\" (UniqueName: \"kubernetes.io/projected/d28490b0-96ca-4fe0-8fae-e6f8390f933b-kube-api-access-qm5p2\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.659170 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.659203 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/16d6dd52-d73b-4696-873e-00a6d4bb2c77-images\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " 
pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 15:01:03.659214 master-0 kubenswrapper[28120]: I0220 15:01:03.659224 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-utilities\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659269 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-serving-cert\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659280 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-host\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659335 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659374 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-serving-cert\") pod 
\"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659396 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26473c28-db42-47e6-9164-8c441ccc48ca-serving-cert\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659411 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659391 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-utilities\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659417 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-etc-kubernetes\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659522 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659557 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-cache\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659591 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/26473c28-db42-47e6-9164-8c441ccc48ca-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659605 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-serving-cert\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659619 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-tmpfs\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 
15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659647 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6nr\" (UniqueName: \"kubernetes.io/projected/21384bd0-495c-406a-9462-e9e740c04686-kube-api-access-gr6nr\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659674 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-cache\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659684 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-config\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659713 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e9807a-859c-44c1-8511-0066b0f59ff8-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 15:01:03.659733 master-0 kubenswrapper[28120]: I0220 15:01:03.659740 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-sys\") pod \"tuned-jc4wl\" (UID: 
\"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.660414 master-0 kubenswrapper[28120]: I0220 15:01:03.659757 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-tmpfs\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 15:01:03.660414 master-0 kubenswrapper[28120]: I0220 15:01:03.659794 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrrq4\" (UniqueName: \"kubernetes.io/projected/af7b6f34-adca-4bdb-9e41-e2995a1d67a8-kube-api-access-nrrq4\") pod \"migrator-5c85bff57-9mbsh\" (UID: \"af7b6f34-adca-4bdb-9e41-e2995a1d67a8\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh" Feb 20 15:01:03.660414 master-0 kubenswrapper[28120]: I0220 15:01:03.659832 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-whereabouts-configmap\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.660414 master-0 kubenswrapper[28120]: I0220 15:01:03.659855 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb" Feb 20 15:01:03.660414 master-0 kubenswrapper[28120]: I0220 15:01:03.659865 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: 
\"kubernetes.io/empty-dir/8a278abf-8c59-4454-94d0-a0d0768cbec5-snapshots\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 15:01:03.660414 master-0 kubenswrapper[28120]: I0220 15:01:03.659873 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-ovnkube-identity-cm\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 15:01:03.660414 master-0 kubenswrapper[28120]: I0220 15:01:03.659890 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 15:01:03.660414 master-0 kubenswrapper[28120]: I0220 15:01:03.659908 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fttgr\" (UniqueName: \"kubernetes.io/projected/419f28a9-8fd7-4b59-9554-4d884a1208b5-kube-api-access-fttgr\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 15:01:03.660414 master-0 kubenswrapper[28120]: I0220 15:01:03.659940 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e9807a-859c-44c1-8511-0066b0f59ff8-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 15:01:03.660414 master-0 kubenswrapper[28120]: I0220 15:01:03.659984 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntlv2\" (UniqueName: \"kubernetes.io/projected/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-kube-api-access-ntlv2\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 15:01:03.660414 master-0 kubenswrapper[28120]: I0220 15:01:03.660008 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/989af121-da08-4f40-b08c-dd2aa67bc60c-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.660025 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9c94\" (UniqueName: \"kubernetes.io/projected/87cf4690-1ec1-44fc-94bd-730d9f2e6762-kube-api-access-r9c94\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.662787 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb46b\" (UniqueName: \"kubernetes.io/projected/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-kube-api-access-mb46b\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 
15:01:03.662827 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwb5n\" (UniqueName: \"kubernetes.io/projected/234a44fd-c153-47a6-a11d-7d4b7165c236-kube-api-access-gwb5n\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.662849 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.662875 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl7wm\" (UniqueName: \"kubernetes.io/projected/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-kube-api-access-tl7wm\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.662903 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8157f73d-c757-40c4-80bc-3c9de2f2288a-serving-cert\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.662965 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b54xg\" (UniqueName: 
\"kubernetes.io/projected/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-kube-api-access-b54xg\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.660187 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.660648 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-whereabouts-configmap\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.662988 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/27ab8945-6a5b-4f7d-b893-6358da214499-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-m2bj7\" (UID: \"27ab8945-6a5b-4f7d-b893-6358da214499\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663084 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/787a4fee-6625-4df5-a432-c7e1190da777-signing-key\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " 
pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663116 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-hostroot\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663145 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663171 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663196 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e3cc4073-a926-4aba-81e6-c616c2bb2987-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-d8fvp\" (UID: \"e3cc4073-a926-4aba-81e6-c616c2bb2987\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663228 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/43e9807a-859c-44c1-8511-0066b0f59ff8-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663257 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57cks\" (UniqueName: \"kubernetes.io/projected/31d71c90-cab7-4411-9426-0713cb026294-kube-api-access-57cks\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663282 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a278abf-8c59-4454-94d0-a0d0768cbec5-service-ca-bundle\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663317 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-systemd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663345 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " 
pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663370 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-systemd\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663387 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8157f73d-c757-40c4-80bc-3c9de2f2288a-serving-cert\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663395 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-tuned\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663424 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663448 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/19ce4b45-db46-4fc3-8d72-963de22f026b-tmp\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " 
pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663481 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/419f28a9-8fd7-4b59-9554-4d884a1208b5-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663505 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-image-import-ca\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663532 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-netd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663574 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663598 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-system-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663623 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663623 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/787a4fee-6625-4df5-a432-c7e1190da777-signing-key\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663652 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.663818 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 
15:01:03.663976 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-slash\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664011 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664021 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e3cc4073-a926-4aba-81e6-c616c2bb2987-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-d8fvp\" (UID: \"e3cc4073-a926-4aba-81e6-c616c2bb2987\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664037 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664085 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-etcd-client\") pod 
\"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664139 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jljjg\" (UniqueName: \"kubernetes.io/projected/49044786-483a-406e-8750-f6ded400841d-kube-api-access-jljjg\") pod \"control-plane-machine-set-operator-686847ff5f-2tpv8\" (UID: \"49044786-483a-406e-8750-f6ded400841d\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664146 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-tuned\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664176 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-srv-cert\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664190 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/19ce4b45-db46-4fc3-8d72-963de22f026b-tmp\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664195 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-image-import-ca\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664236 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31d71c90-cab7-4411-9426-0713cb026294-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664286 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a278abf-8c59-4454-94d0-a0d0768cbec5-serving-cert\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664417 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/419f28a9-8fd7-4b59-9554-4d884a1208b5-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664362 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-etcd-client\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 
15:01:03.664436 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-os-release\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664474 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664594 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664633 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-config\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664688 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod 
\"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664719 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-ovn\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664770 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664842 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-multus-certs\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664874 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c81ad608-a8ad-4289-a8d2-d48acb9b540c-serving-cert\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.664989 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/d28490b0-96ca-4fe0-8fae-e6f8390f933b-metrics-tls\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665013 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n85mh\" (UniqueName: \"kubernetes.io/projected/900e244c-67aa-402f-b5f0-d37c5c1cedf7-kube-api-access-n85mh\") pod \"csi-snapshot-controller-operator-6fb4df594f-p29qr\" (UID: \"900e244c-67aa-402f-b5f0-d37c5c1cedf7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665044 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45226\" (UniqueName: \"kubernetes.io/projected/19ce4b45-db46-4fc3-8d72-963de22f026b-kube-api-access-45226\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665099 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/26473c28-db42-47e6-9164-8c441ccc48ca-etc-ssl-certs\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665129 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db9dc349-5216-43ff-8c17-3a9384a010ea-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665180 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c81ad608-a8ad-4289-a8d2-d48acb9b540c-serving-cert\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665177 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-kubelet\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665211 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cf4690-1ec1-44fc-94bd-730d9f2e6762-host-slash\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665527 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-encryption-config\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665587 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rln42\" (UniqueName: 
\"kubernetes.io/projected/ac3680de-aabf-414b-a340-5e5e6aea4822-kube-api-access-rln42\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665616 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c31b8a7-edcb-403d-9122-7eb740f7d659-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665665 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e9807a-859c-44c1-8511-0066b0f59ff8-config\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665693 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665718 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-cni-binary-copy\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.666591 master-0 
kubenswrapper[28120]: I0220 15:01:03.665771 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-bin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665798 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnwtd\" (UniqueName: \"kubernetes.io/projected/1fe69517-eec2-4721-933c-fa27cea7ab1f-kube-api-access-rnwtd\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665853 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv57n\" (UniqueName: \"kubernetes.io/projected/448aafd2-ffb3-42c5-8085-f6194d7862e5-kube-api-access-nv57n\") pod \"node-resolver-djs75\" (UID: \"448aafd2-ffb3-42c5-8085-f6194d7862e5\") " pod="openshift-dns/node-resolver-djs75" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665878 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.665940 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod 
\"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.666011 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24c827995023caaffd01654949c8d4dd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.666044 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svlzf\" (UniqueName: \"kubernetes.io/projected/9fd9f419-2cdc-4991-8fb9-87d76ac58976-kube-api-access-svlzf\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.666095 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.666118 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/448aafd2-ffb3-42c5-8085-f6194d7862e5-hosts-file\") pod \"node-resolver-djs75\" (UID: \"448aafd2-ffb3-42c5-8085-f6194d7862e5\") " pod="openshift-dns/node-resolver-djs75" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.666165 28120 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.666193 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcfnf\" (UniqueName: \"kubernetes.io/projected/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-kube-api-access-wcfnf\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.666214 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-webhook-cert\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.666265 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.666586 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-cni-binary-copy\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " 
pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.666591 master-0 kubenswrapper[28120]: I0220 15:01:03.666611 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-encryption-config\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.669792 master-0 kubenswrapper[28120]: I0220 15:01:03.666689 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-metrics-tls\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 15:01:03.669792 master-0 kubenswrapper[28120]: I0220 15:01:03.666837 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49044786-483a-406e-8750-f6ded400841d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-2tpv8\" (UID: \"49044786-483a-406e-8750-f6ded400841d\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8" Feb 20 15:01:03.669792 master-0 kubenswrapper[28120]: I0220 15:01:03.666906 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vz22\" (UniqueName: \"kubernetes.io/projected/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-kube-api-access-2vz22\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" Feb 20 15:01:03.669792 master-0 kubenswrapper[28120]: I0220 15:01:03.666958 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/43e9807a-859c-44c1-8511-0066b0f59ff8-config\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6" Feb 20 15:01:03.669792 master-0 kubenswrapper[28120]: I0220 15:01:03.666965 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db9dc349-5216-43ff-8c17-3a9384a010ea-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 15:01:03.669792 master-0 kubenswrapper[28120]: I0220 15:01:03.666988 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" Feb 20 15:01:03.669792 master-0 kubenswrapper[28120]: I0220 15:01:03.667028 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c31b8a7-edcb-403d-9122-7eb740f7d659-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 15:01:03.669792 master-0 kubenswrapper[28120]: I0220 15:01:03.667056 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-metrics-tls\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " 
pod="openshift-dns/dns-default-dzfl8" Feb 20 15:01:03.669792 master-0 kubenswrapper[28120]: I0220 15:01:03.667021 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" Feb 20 15:01:03.669792 master-0 kubenswrapper[28120]: I0220 15:01:03.667148 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-mcd-auth-proxy-config\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 15:01:03.672366 master-0 kubenswrapper[28120]: E0220 15:01:03.672322 28120 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:01:03.673162 master-0 kubenswrapper[28120]: E0220 15:01:03.673017 28120 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 20 15:01:03.675224 master-0 kubenswrapper[28120]: I0220 15:01:03.667212 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-var-lib-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.675333 master-0 kubenswrapper[28120]: I0220 15:01:03.675293 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-metrics-tls\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 15:01:03.675392 master-0 kubenswrapper[28120]: I0220 15:01:03.675347 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-binary-copy\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.675392 master-0 kubenswrapper[28120]: I0220 15:01:03.675386 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-stats-auth\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:01:03.675643 master-0 kubenswrapper[28120]: I0220 15:01:03.675425 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-node-pullsecrets\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.675643 master-0 kubenswrapper[28120]: I0220 15:01:03.675463 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4339bd5-b8d1-467e-8158-4464ea901148-serving-cert\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 15:01:03.675884 master-0 
kubenswrapper[28120]: I0220 15:01:03.675753 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk2pl\" (UniqueName: \"kubernetes.io/projected/ee3a6748-0bbc-41bf-8726-a8db18faf03b-kube-api-access-mk2pl\") pod \"cluster-samples-operator-65c5c48b9b-92c4x\" (UID: \"ee3a6748-0bbc-41bf-8726-a8db18faf03b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x" Feb 20 15:01:03.676237 master-0 kubenswrapper[28120]: I0220 15:01:03.676192 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:01:03.676308 master-0 kubenswrapper[28120]: I0220 15:01:03.676282 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 15:01:03.676734 master-0 kubenswrapper[28120]: I0220 15:01:03.676677 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-binary-copy\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.676792 master-0 kubenswrapper[28120]: I0220 15:01:03.676767 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 15:01:03.679059 master-0 kubenswrapper[28120]: I0220 15:01:03.679028 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.681547 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.681588 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-bin\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.681621 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5fng\" (UniqueName: \"kubernetes.io/projected/84a61910-48eb-4c27-8d69-f6aa7ce912ca-kube-api-access-l5fng\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.681980 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.682113 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.682144 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbw6n\" (UniqueName: \"kubernetes.io/projected/33675e96-ce49-49be-9117-954ac7cca5d5-kube-api-access-hbw6n\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.682170 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16d6dd52-d73b-4696-873e-00a6d4bb2c77-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.682191 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-encryption-config\") pod \"apiserver-7659f6b598-z8454\" (UID: 
\"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.682263 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xd6r\" (UniqueName: \"kubernetes.io/projected/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-kube-api-access-2xd6r\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.682327 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c31b8a7-edcb-403d-9122-7eb740f7d659-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.682349 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysctl-d\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.682368 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mclrj\" (UniqueName: \"kubernetes.io/projected/5d2b154b-de63-4c9b-99d8-487fb3035fb9-kube-api-access-mclrj\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.682389 28120 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.682410 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/84a61910-48eb-4c27-8d69-f6aa7ce912ca-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 15:01:03.682578 master-0 kubenswrapper[28120]: I0220 15:01:03.682431 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-env-overrides\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.683338 master-0 kubenswrapper[28120]: I0220 15:01:03.682758 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db9dc349-5216-43ff-8c17-3a9384a010ea-config\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 15:01:03.683338 master-0 kubenswrapper[28120]: I0220 15:01:03.682859 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/84a61910-48eb-4c27-8d69-f6aa7ce912ca-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 15:01:03.683338 master-0 kubenswrapper[28120]: I0220 15:01:03.682977 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-kubernetes\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.683338 master-0 kubenswrapper[28120]: I0220 15:01:03.683002 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-apiservice-cert\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 15:01:03.683338 master-0 kubenswrapper[28120]: I0220 15:01:03.683019 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21384bd0-495c-406a-9462-e9e740c04686-ovn-node-metrics-cert\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.683338 master-0 kubenswrapper[28120]: I0220 15:01:03.683041 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-proxy-tls\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 15:01:03.683338 master-0 kubenswrapper[28120]: I0220 15:01:03.683059 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-script-lib\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.683338 master-0 kubenswrapper[28120]: I0220 15:01:03.683080 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 15:01:03.683338 master-0 kubenswrapper[28120]: I0220 15:01:03.682916 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c31b8a7-edcb-403d-9122-7eb740f7d659-config\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683104 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psd59\" (UniqueName: \"kubernetes.io/projected/b6e6d218-d969-40b5-a32b-9b2093089dbf-kube-api-access-psd59\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683437 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d7ef0c-272b-4d1e-965f-484975d5d25c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 15:01:03.684212 master-0 
kubenswrapper[28120]: I0220 15:01:03.683463 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c0a3548f-299c-4234-9bf1-c93efcb9740b-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683506 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-audit-dir\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683547 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9fd9f419-2cdc-4991-8fb9-87d76ac58976-metrics-tls\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683572 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21384bd0-495c-406a-9462-e9e740c04686-ovn-node-metrics-cert\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683367 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-encryption-config\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " 
pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683750 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d7ef0c-272b-4d1e-965f-484975d5d25c-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683606 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683814 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-client\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683838 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtgrt\" (UniqueName: \"kubernetes.io/projected/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-kube-api-access-xtgrt\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683857 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-run\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683879 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.683902 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.684080 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6e6d218-d969-40b5-a32b-9b2093089dbf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.684144 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-proxy-tls\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" Feb 20 15:01:03.684212 master-0 
kubenswrapper[28120]: I0220 15:01:03.684201 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9fd9f419-2cdc-4991-8fb9-87d76ac58976-metrics-tls\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" Feb 20 15:01:03.684212 master-0 kubenswrapper[28120]: I0220 15:01:03.684225 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj4dx\" (UniqueName: \"kubernetes.io/projected/c81ad608-a8ad-4289-a8d2-d48acb9b540c-kube-api-access-wj4dx\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 15:01:03.685087 master-0 kubenswrapper[28120]: I0220 15:01:03.684267 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee3a6748-0bbc-41bf-8726-a8db18faf03b-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-92c4x\" (UID: \"ee3a6748-0bbc-41bf-8726-a8db18faf03b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x" Feb 20 15:01:03.685087 master-0 kubenswrapper[28120]: I0220 15:01:03.684432 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:03.685087 master-0 kubenswrapper[28120]: I0220 15:01:03.684477 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 15:01:03.685087 master-0 kubenswrapper[28120]: I0220 15:01:03.684684 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-serving-ca\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.685087 master-0 kubenswrapper[28120]: I0220 15:01:03.684724 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-audit-dir\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.685087 master-0 kubenswrapper[28120]: I0220 15:01:03.684759 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvthk\" (UniqueName: \"kubernetes.io/projected/a4339bd5-b8d1-467e-8158-4464ea901148-kube-api-access-jvthk\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 15:01:03.685087 master-0 kubenswrapper[28120]: I0220 15:01:03.684909 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-client\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 15:01:03.687967 
master-0 kubenswrapper[28120]: I0220 15:01:03.686642 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k6br\" (UniqueName: \"kubernetes.io/projected/787a4fee-6625-4df5-a432-c7e1190da777-kube-api-access-9k6br\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.686945 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-rootfs\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687003 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687160 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkq7j\" (UniqueName: \"kubernetes.io/projected/32a79fe0-e619-4a66-8617-e8111bdc7e96-kube-api-access-jkq7j\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687265 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nlf9\" (UniqueName: \"kubernetes.io/projected/5ea4c132-b6d0-4dc9-942d-48e359eed418-kube-api-access-7nlf9\") pod 
\"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687291 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c81ad608-a8ad-4289-a8d2-d48acb9b540c-config\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687314 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzmqr\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-kube-api-access-pzmqr\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687355 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-config\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687377 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-system-cni-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687394 28120 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-k8s-cni-cncf-io\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687432 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687456 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/989af121-da08-4f40-b08c-dd2aa67bc60c-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687473 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-lib-modules\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687595 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-service-ca-bundle\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: 
\"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687634 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93786626-fac4-48f0-bf72-992bc39f4a82-catalog-content\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687774 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3680de-aabf-414b-a340-5e5e6aea4822-catalog-content\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687798 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwclx\" (UniqueName: \"kubernetes.io/projected/b385880b-a26b-4353-8f6f-b7f926bcc67c-kube-api-access-fwclx\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687905 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c81ad608-a8ad-4289-a8d2-d48acb9b540c-config\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687945 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" 
(UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-multus\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.687984 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93786626-fac4-48f0-bf72-992bc39f4a82-catalog-content\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 15:01:03.687967 master-0 kubenswrapper[28120]: I0220 15:01:03.688004 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-serving-ca\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688036 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk5m4\" (UniqueName: \"kubernetes.io/projected/8157f73d-c757-40c4-80bc-3c9de2f2288a-kube-api-access-bk5m4\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688089 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-config\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688119 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-svhtr\" (UniqueName: \"kubernetes.io/projected/45d7ef0c-272b-4d1e-965f-484975d5d25c-kube-api-access-svhtr\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688136 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac3680de-aabf-414b-a340-5e5e6aea4822-catalog-content\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688170 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-serving-cert\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688177 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/989af121-da08-4f40-b08c-dd2aa67bc60c-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688212 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26473c28-db42-47e6-9164-8c441ccc48ca-kube-api-access\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: 
\"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688246 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-systemd-units\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688264 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-config\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688314 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-netns\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688387 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-netns\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688415 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-log-socket\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688441 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-config-volume\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688556 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/87cf4690-1ec1-44fc-94bd-730d9f2e6762-iptables-alerter-script\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688598 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-config-volume\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688628 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16d6dd52-d73b-4696-873e-00a6d4bb2c77-proxy-tls\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688658 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4jn8g\" (UniqueName: \"kubernetes.io/projected/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-kube-api-access-4jn8g\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688700 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688721 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688739 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688776 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688789 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16d6dd52-d73b-4696-873e-00a6d4bb2c77-proxy-tls\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 15:01:03.688772 master-0 kubenswrapper[28120]: I0220 15:01:03.688801 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.688828 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a278abf-8c59-4454-94d0-a0d0768cbec5-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.688850 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9crd\" (UniqueName: \"kubernetes.io/projected/8a278abf-8c59-4454-94d0-a0d0768cbec5-kube-api-access-r9crd\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 15:01:03.689601 master-0 
kubenswrapper[28120]: I0220 15:01:03.688869 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.688887 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxncg\" (UniqueName: \"kubernetes.io/projected/16d6dd52-d73b-4696-873e-00a6d4bb2c77-kube-api-access-sxncg\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.688908 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689021 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689033 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/234a44fd-c153-47a6-a11d-7d4b7165c236-serving-cert\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: 
\"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689053 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689236 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a278abf-8c59-4454-94d0-a0d0768cbec5-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689330 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689350 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24c827995023caaffd01654949c8d4dd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689369 28120 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-trusted-ca-bundle\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689389 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-var-lib-kubelet\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689409 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcffg\" (UniqueName: \"kubernetes.io/projected/86f6836b-b018-4c7a-87ad-51809a4b9c7a-kube-api-access-wcffg\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689425 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-config\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689382 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/419f28a9-8fd7-4b59-9554-4d884a1208b5-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689442 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-etcd-serving-ca\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.689601 master-0 kubenswrapper[28120]: I0220 15:01:03.689618 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/31d71c90-cab7-4411-9426-0713cb026294-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4" Feb 20 15:01:03.690272 master-0 kubenswrapper[28120]: I0220 15:01:03.689618 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 15:01:03.690272 master-0 kubenswrapper[28120]: I0220 15:01:03.689823 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:03.690272 master-0 kubenswrapper[28120]: I0220 15:01:03.689876 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-etcd-serving-ca\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.690272 master-0 kubenswrapper[28120]: I0220 15:01:03.689928 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-trusted-ca-bundle\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.690272 master-0 kubenswrapper[28120]: I0220 15:01:03.689942 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-client\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.690272 master-0 kubenswrapper[28120]: I0220 15:01:03.689972 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/989af121-da08-4f40-b08c-dd2aa67bc60c-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 15:01:03.690272 master-0 kubenswrapper[28120]: I0220 15:01:03.689999 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-socket-dir-parent\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.690272 master-0 kubenswrapper[28120]: I0220 
15:01:03.690030 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwnq7\" (UniqueName: \"kubernetes.io/projected/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-kube-api-access-mwnq7\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 15:01:03.690272 master-0 kubenswrapper[28120]: I0220 15:01:03.690054 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 15:01:03.690272 master-0 kubenswrapper[28120]: I0220 15:01:03.690108 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 15:01:03.690272 master-0 kubenswrapper[28120]: I0220 15:01:03.690141 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lcqg\" (UniqueName: \"kubernetes.io/projected/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-kube-api-access-9lcqg\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:03.690272 master-0 kubenswrapper[28120]: I0220 15:01:03.690174 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-cnibin\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " 
pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690404 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/989af121-da08-4f40-b08c-dd2aa67bc60c-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690436 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lp29\" (UniqueName: \"kubernetes.io/projected/a1af84e0-776b-4285-906a-6880dbc82a7b-kube-api-access-6lp29\") pod \"csi-snapshot-controller-6847bb4785-2mtj6\" (UID: \"a1af84e0-776b-4285-906a-6880dbc82a7b\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690455 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm2jn\" (UniqueName: \"kubernetes.io/projected/93786626-fac4-48f0-bf72-992bc39f4a82-kube-api-access-fm2jn\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690473 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysconfig\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690516 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690536 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fd9f419-2cdc-4991-8fb9-87d76ac58976-host-etc-kube\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690554 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/84a61910-48eb-4c27-8d69-f6aa7ce912ca-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690570 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-kubelet\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690586 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a4339bd5-b8d1-467e-8158-4464ea901148-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " 
pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690605 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c31b8a7-edcb-403d-9122-7eb740f7d659-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690666 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4dn4\" (UniqueName: \"kubernetes.io/projected/92008ac4-8deb-4fb9-9116-14d2d005bd36-kube-api-access-n4dn4\") pod \"network-check-source-58fb6744f5-nth67\" (UID: \"92008ac4-8deb-4fb9-9116-14d2d005bd36\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-nth67" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690684 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-metrics-certs\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690702 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-images\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690719 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-catalog-content\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690737 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/84a61910-48eb-4c27-8d69-f6aa7ce912ca-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 15:01:03.690745 master-0 kubenswrapper[28120]: I0220 15:01:03.690755 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b385880b-a26b-4353-8f6f-b7f926bcc67c-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.690772 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3680de-aabf-414b-a340-5e5e6aea4822-utilities\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.690782 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-etcd-client\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.691292 master-0 
kubenswrapper[28120]: I0220 15:01:03.690790 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-audit-policies\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.690811 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-trusted-ca-bundle\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.690829 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d5fq\" (UniqueName: \"kubernetes.io/projected/c0a3548f-299c-4234-9bf1-c93efcb9740b-kube-api-access-7d5fq\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.690846 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26473c28-db42-47e6-9164-8c441ccc48ca-service-ca\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.690863 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93786626-fac4-48f0-bf72-992bc39f4a82-utilities\") pod \"redhat-operators-z4wzg\" (UID: 
\"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691016 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33675e96-ce49-49be-9117-954ac7cca5d5-webhook-cert\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691035 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-catalog-content\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691057 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-modprobe-d\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691079 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691097 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-daemon-config\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691115 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47sqj\" (UniqueName: \"kubernetes.io/projected/64e9eca9-bbdd-4eca-9219-922bbab9b388-kube-api-access-47sqj\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691140 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691157 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691179 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpt8j\" (UniqueName: \"kubernetes.io/projected/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-kube-api-access-jpt8j\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691205 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-audit\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691232 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691252 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691270 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-srv-cert\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691287 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45d7ef0c-272b-4d1e-965f-484975d5d25c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 15:01:03.691292 master-0 kubenswrapper[28120]: I0220 15:01:03.691310 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 15:01:03.692380 master-0 kubenswrapper[28120]: I0220 15:01:03.691403 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93786626-fac4-48f0-bf72-992bc39f4a82-utilities\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 15:01:03.692380 master-0 kubenswrapper[28120]: I0220 15:01:03.691859 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/84a61910-48eb-4c27-8d69-f6aa7ce912ca-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 15:01:03.692380 master-0 kubenswrapper[28120]: I0220 15:01:03.691936 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a4339bd5-b8d1-467e-8158-4464ea901148-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 15:01:03.692380 master-0 kubenswrapper[28120]: I0220 15:01:03.692060 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-catalog-content\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb" Feb 20 15:01:03.692380 master-0 kubenswrapper[28120]: I0220 15:01:03.692147 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-trusted-ca-bundle\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.692380 master-0 kubenswrapper[28120]: I0220 15:01:03.692190 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac3680de-aabf-414b-a340-5e5e6aea4822-utilities\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 15:01:03.692380 master-0 kubenswrapper[28120]: I0220 15:01:03.692307 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-audit-policies\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:03.692630 master-0 kubenswrapper[28120]: I0220 15:01:03.692591 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33675e96-ce49-49be-9117-954ac7cca5d5-webhook-cert\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 15:01:03.692670 master-0 kubenswrapper[28120]: I0220 15:01:03.692659 28120 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-catalog-content\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt" Feb 20 15:01:03.692831 master-0 kubenswrapper[28120]: I0220 15:01:03.692800 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-daemon-config\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.693130 master-0 kubenswrapper[28120]: I0220 15:01:03.693098 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ea4c132-b6d0-4dc9-942d-48e359eed418-metrics-certs\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 15:01:03.693363 master-0 kubenswrapper[28120]: I0220 15:01:03.693332 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-audit\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:03.693590 master-0 kubenswrapper[28120]: I0220 15:01:03.693558 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe69517-eec2-4721-933c-fa27cea7ab1f-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 15:01:03.693801 master-0 kubenswrapper[28120]: I0220 15:01:03.693782 
28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45d7ef0c-272b-4d1e-965f-484975d5d25c-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 15:01:03.694634 master-0 kubenswrapper[28120]: I0220 15:01:03.694569 28120 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 20 15:01:03.698535 master-0 kubenswrapper[28120]: I0220 15:01:03.698468 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 20 15:01:03.700656 master-0 kubenswrapper[28120]: I0220 15:01:03.700618 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/16d6dd52-d73b-4696-873e-00a6d4bb2c77-images\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 15:01:03.719268 master-0 kubenswrapper[28120]: I0220 15:01:03.719164 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 20 15:01:03.722772 master-0 kubenswrapper[28120]: I0220 15:01:03.722738 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16d6dd52-d73b-4696-873e-00a6d4bb2c77-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 15:01:03.725270 master-0 kubenswrapper[28120]: I0220 15:01:03.725246 28120 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" Feb 20 15:01:03.728941 master-0 kubenswrapper[28120]: I0220 15:01:03.728878 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-mcd-auth-proxy-config\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 15:01:03.739353 master-0 kubenswrapper[28120]: I0220 15:01:03.739303 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 20 15:01:03.760452 master-0 kubenswrapper[28120]: I0220 15:01:03.760388 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-cp6wb" Feb 20 15:01:03.780044 master-0 kubenswrapper[28120]: I0220 15:01:03.779972 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 20 15:01:03.787846 master-0 kubenswrapper[28120]: I0220 15:01:03.787802 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" Feb 20 15:01:03.791902 master-0 kubenswrapper[28120]: I0220 15:01:03.791845 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-netd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.791986 master-0 kubenswrapper[28120]: I0220 15:01:03.791906 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-metrics-client-ca\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:03.792043 master-0 kubenswrapper[28120]: I0220 15:01:03.792003 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-netd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.792122 master-0 kubenswrapper[28120]: I0220 15:01:03.792091 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-system-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.792166 master-0 kubenswrapper[28120]: I0220 15:01:03.792125 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.792166 master-0 kubenswrapper[28120]: 
I0220 15:01:03.792139 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-system-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.792166 master-0 kubenswrapper[28120]: I0220 15:01:03.792150 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:01:03.792282 master-0 kubenswrapper[28120]: I0220 15:01:03.792175 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.792282 master-0 kubenswrapper[28120]: I0220 15:01:03.792236 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-slash\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.792282 master-0 kubenswrapper[28120]: I0220 15:01:03.792242 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:01:03.792282 master-0 
kubenswrapper[28120]: I0220 15:01:03.792267 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-client-certs\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:03.792492 master-0 kubenswrapper[28120]: I0220 15:01:03.792292 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-slash\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.792492 master-0 kubenswrapper[28120]: I0220 15:01:03.792358 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.792492 master-0 kubenswrapper[28120]: I0220 15:01:03.792384 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.792492 master-0 kubenswrapper[28120]: I0220 15:01:03.792423 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-os-release\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " 
pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.792492 master-0 kubenswrapper[28120]: I0220 15:01:03.792446 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.792492 master-0 kubenswrapper[28120]: I0220 15:01:03.792464 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-metrics-server-audit-profiles\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:03.792492 master-0 kubenswrapper[28120]: I0220 15:01:03.792493 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-ovn\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.792730 master-0 kubenswrapper[28120]: I0220 15:01:03.792512 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.792730 master-0 kubenswrapper[28120]: I0220 15:01:03.792524 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.792730 master-0 kubenswrapper[28120]: I0220 15:01:03.792553 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-ovn\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.792730 master-0 kubenswrapper[28120]: I0220 15:01:03.792529 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-os-release\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.792730 master-0 kubenswrapper[28120]: I0220 15:01:03.792581 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.792730 master-0 kubenswrapper[28120]: I0220 15:01:03.792617 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" Feb 20 15:01:03.792730 master-0 kubenswrapper[28120]: I0220 15:01:03.792637 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" 
(UniqueName: \"kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" Feb 20 15:01:03.792730 master-0 kubenswrapper[28120]: I0220 15:01:03.792675 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-multus-certs\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.792730 master-0 kubenswrapper[28120]: I0220 15:01:03.792694 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-cert\") pod \"ingress-canary-5qlzq\" (UID: \"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665\") " pod="openshift-ingress-canary/ingress-canary-5qlzq" Feb 20 15:01:03.792730 master-0 kubenswrapper[28120]: I0220 15:01:03.792711 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-config\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.792745 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-multus-certs\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.792798 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/26473c28-db42-47e6-9164-8c441ccc48ca-etc-ssl-certs\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.792821 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-kubelet\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.792824 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/26473c28-db42-47e6-9164-8c441ccc48ca-etc-ssl-certs\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.792848 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cf4690-1ec1-44fc-94bd-730d9f2e6762-host-slash\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r" Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.792862 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-kubelet\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: 
I0220 15:01:03.792880 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.792897 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cf4690-1ec1-44fc-94bd-730d9f2e6762-host-slash\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r" Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.792913 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/49defec6-a225-47ab-99ff-7a846f23eb00-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-7j5jb\" (UID: \"49defec6-a225-47ab-99ff-7a846f23eb00\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb" Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.793004 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.793053 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-bin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.793084 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-bin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.793106 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.793127 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkc7z\" (UniqueName: \"kubernetes.io/projected/99fe3b99-0b40-4887-bcc8-59caa515b99f-kube-api-access-dkc7z\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 15:01:03.793136 master-0 kubenswrapper[28120]: I0220 15:01:03.793145 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793225 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793255 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793275 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24c827995023caaffd01654949c8d4dd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793279 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793295 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793312 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793318 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24c827995023caaffd01654949c8d4dd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793336 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793352 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/448aafd2-ffb3-42c5-8085-f6194d7862e5-hosts-file\") pod \"node-resolver-djs75\" (UID: \"448aafd2-ffb3-42c5-8085-f6194d7862e5\") " pod="openshift-dns/node-resolver-djs75"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793362 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793368 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793392 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-federate-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793402 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/448aafd2-ffb3-42c5-8085-f6194d7862e5-hosts-file\") pod \"node-resolver-djs75\" (UID: \"448aafd2-ffb3-42c5-8085-f6194d7862e5\") " pod="openshift-dns/node-resolver-djs75"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793414 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793420 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793436 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793446 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c0b78aa6-7bc8-4221-81f5-bf62a7110380-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793515 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793523 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c0b78aa6-7bc8-4221-81f5-bf62a7110380-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793542 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-var-lib-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793573 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-node-pullsecrets\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793604 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793619 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-var-lib-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.793645 master-0 kubenswrapper[28120]: I0220 15:01:03.793624 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.793690 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-node-pullsecrets\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.793754 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.793779 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.793826 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.793855 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-bin\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.793879 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.793907 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-cni-bin\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794031 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysctl-d\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794051 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794071 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794093 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysctl-d\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794107 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794170 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-kubernetes\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794207 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-textfile\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794237 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794248 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-kubernetes\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794321 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-textfile\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794344 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/996d4949-f92c-42ac-9bda-8c6ec0295e92-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794389 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794416 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-sys\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794453 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-audit-dir\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794471 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794498 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-run\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794517 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 15:01:03.794517 master-0 kubenswrapper[28120]: I0220 15:01:03.794535 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a39c5481-961c-4ac2-8c5b-a2c0165f4188-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794581 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-run\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794638 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-audit-dir\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794671 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-rootfs\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794674 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-audit-dir\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794694 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-audit-dir\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794697 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/99fe3b99-0b40-4887-bcc8-59caa515b99f-metrics-client-ca\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794816 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-system-cni-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794710 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-rootfs\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794837 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-k8s-cni-cncf-io\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794883 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794888 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-system-cni-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794901 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-lib-modules\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.794892 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-k8s-cni-cncf-io\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.795044 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-lib-modules\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.795202 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-multus\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.795255 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-systemd-units\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.795282 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-netns\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.795289 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-cni-multus\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.795306 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-netns\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.795327 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-run-netns\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.795333 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-systemd-units\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.795421 master-0 kubenswrapper[28120]: I0220 15:01:03.795332 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-log-socket\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795360 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-log-socket\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795368 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-host-run-netns\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795447 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795539 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-certs\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795598 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/caef1c17-56b0-479c-b000-caaac3c2b249-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795696 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-node-bootstrap-token\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795779 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795805 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-server-tls\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795831 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795865 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795867 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795890 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795915 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24c827995023caaffd01654949c8d4dd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795954 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc9pl\" (UniqueName: \"kubernetes.io/projected/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-kube-api-access-lc9pl\") pod \"ingress-canary-5qlzq\" (UID: \"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665\") " pod="openshift-ingress-canary/ingress-canary-5qlzq"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.795970 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-var-lib-kubelet\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.796003 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z67rw\" (UniqueName: \"kubernetes.io/projected/8e8c5772-b6e2-43d8-b173-af74541855fb-kube-api-access-z67rw\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.796105 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-var-lib-kubelet\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.796135 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24c827995023caaffd01654949c8d4dd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.796135 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-wtmp\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.796165 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-socket-dir-parent\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.796190 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 15:01:03.796192 master-0 kubenswrapper[28120]: I0220 15:01:03.796208 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796226 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxjcq\" (UniqueName: \"kubernetes.io/projected/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-kube-api-access-wxjcq\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"
Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796252 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-cnibin\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p"
Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796252 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796291 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName:
\"kubernetes.io/host-path/b6e6d218-d969-40b5-a32b-9b2093089dbf-cnibin\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796305 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zppr\" (UniqueName: \"kubernetes.io/projected/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-kube-api-access-9zppr\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796315 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796334 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr5wk\" (UniqueName: \"kubernetes.io/projected/bdf18981-b755-4b11-8793-38bc5e2e755b-kube-api-access-wr5wk\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796365 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl7tw\" (UniqueName: \"kubernetes.io/projected/a39c5481-961c-4ac2-8c5b-a2c0165f4188-kube-api-access-tl7tw\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796390 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysconfig\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796417 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-socket-dir-parent\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796485 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae43311e-14ba-40a1-bdbf-f02d68031757-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796519 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796579 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf5p9\" (UniqueName: \"kubernetes.io/projected/ae43311e-14ba-40a1-bdbf-f02d68031757-kube-api-access-mf5p9\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " 
pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796597 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysconfig\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796605 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fd9f419-2cdc-4991-8fb9-87d76ac58976-host-etc-kube\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796643 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-kubelet\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796643 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9fd9f419-2cdc-4991-8fb9-87d76ac58976-host-etc-kube\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796717 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-host-var-lib-kubelet\") pod \"multus-m6hpf\" (UID: 
\"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796748 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcqd4\" (UniqueName: \"kubernetes.io/projected/ef3a09a5-b019-48a3-97f8-7ddadb37394e-kube-api-access-pcqd4\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796814 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/84a61910-48eb-4c27-8d69-f6aa7ce912ca-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 15:01:03.796856 master-0 kubenswrapper[28120]: I0220 15:01:03.796851 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:03.797552 master-0 kubenswrapper[28120]: I0220 15:01:03.796939 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/84a61910-48eb-4c27-8d69-f6aa7ce912ca-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 15:01:03.797552 master-0 kubenswrapper[28120]: I0220 15:01:03.796953 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-serving-certs-ca-bundle\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:03.797552 master-0 kubenswrapper[28120]: I0220 15:01:03.797124 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-modprobe-d\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.797552 master-0 kubenswrapper[28120]: I0220 15:01:03.797162 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.797701 master-0 kubenswrapper[28120]: I0220 15:01:03.797563 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.797701 master-0 kubenswrapper[28120]: I0220 15:01:03.797212 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-modprobe-d\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.797701 master-0 kubenswrapper[28120]: I0220 
15:01:03.797628 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.797701 master-0 kubenswrapper[28120]: I0220 15:01:03.797665 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 15:01:03.797943 master-0 kubenswrapper[28120]: I0220 15:01:03.797829 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 15:01:03.797943 master-0 kubenswrapper[28120]: I0220 15:01:03.797904 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-conf-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.798025 master-0 kubenswrapper[28120]: I0220 15:01:03.797787 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 15:01:03.798025 master-0 kubenswrapper[28120]: I0220 15:01:03.797974 28120 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/84a61910-48eb-4c27-8d69-f6aa7ce912ca-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 15:01:03.798164 master-0 kubenswrapper[28120]: I0220 15:01:03.798027 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/84a61910-48eb-4c27-8d69-f6aa7ce912ca-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 15:01:03.798164 master-0 kubenswrapper[28120]: I0220 15:01:03.797740 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-cni-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.798164 master-0 kubenswrapper[28120]: I0220 15:01:03.798044 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-etc-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.798164 master-0 kubenswrapper[28120]: I0220 15:01:03.798067 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-etc-openvswitch\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.798164 master-0 kubenswrapper[28120]: I0220 
15:01:03.798111 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-root\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp" Feb 20 15:01:03.798164 master-0 kubenswrapper[28120]: I0220 15:01:03.798132 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:03.798164 master-0 kubenswrapper[28120]: I0220 15:01:03.798151 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:03.798164 master-0 kubenswrapper[28120]: I0220 15:01:03.797994 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-multus-conf-dir\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.798164 master-0 kubenswrapper[28120]: I0220 15:01:03.798170 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: 
\"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798209 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysctl-conf\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798247 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/caef1c17-56b0-479c-b000-caaac3c2b249-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798277 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-os-release\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798303 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-cnibin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798318 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798346 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798362 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-node-log\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798375 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-sysctl-conf\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798385 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798406 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8kgzf\" (UniqueName: \"kubernetes.io/projected/caef1c17-56b0-479c-b000-caaac3c2b249-kube-api-access-8kgzf\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798426 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-tls\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798460 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-os-release\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798466 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-host\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798488 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798514 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-etc-kubernetes\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.798509 master-0 kubenswrapper[28120]: I0220 15:01:03.798518 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798541 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhzk6\" (UniqueName: \"kubernetes.io/projected/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-api-access-lhzk6\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798562 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/26473c28-db42-47e6-9164-8c441ccc48ca-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798593 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-sys\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 
15:01:03.798629 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798675 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k94cb\" (UniqueName: \"kubernetes.io/projected/49defec6-a225-47ab-99ff-7a846f23eb00-kube-api-access-k94cb\") pod \"multus-admission-controller-5f54bf67d4-7j5jb\" (UID: \"49defec6-a225-47ab-99ff-7a846f23eb00\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798768 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/26473c28-db42-47e6-9164-8c441ccc48ca-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798769 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-cnibin\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798782 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-etc-kubernetes\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " 
pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798795 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-host\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798812 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-node-log\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798832 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-sys\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798900 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798942 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kfqn\" (UniqueName: \"kubernetes.io/projected/996d4949-f92c-42ac-9bda-8c6ec0295e92-kube-api-access-4kfqn\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " 
pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798964 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.799001 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-hostroot\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.799088 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:03.799103 master-0 kubenswrapper[28120]: I0220 15:01:03.798965 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/32a79fe0-e619-4a66-8617-e8111bdc7e96-hostroot\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:03.799665 master-0 kubenswrapper[28120]: I0220 15:01:03.799178 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.799665 master-0 kubenswrapper[28120]: I0220 15:01:03.799204 28120 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-audit-log\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:03.799665 master-0 kubenswrapper[28120]: I0220 15:01:03.799238 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:01:03.799665 master-0 kubenswrapper[28120]: I0220 15:01:03.799243 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-systemd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.799665 master-0 kubenswrapper[28120]: I0220 15:01:03.799260 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21384bd0-495c-406a-9462-e9e740c04686-run-systemd\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:03.799665 master-0 kubenswrapper[28120]: I0220 15:01:03.799266 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:03.799665 master-0 kubenswrapper[28120]: I0220 15:01:03.799290 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" Feb 20 15:01:03.799665 master-0 kubenswrapper[28120]: I0220 15:01:03.799347 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-audit-log\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:03.799665 master-0 kubenswrapper[28120]: I0220 15:01:03.799362 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-systemd\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.799665 master-0 kubenswrapper[28120]: I0220 15:01:03.799383 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 15:01:03.799665 master-0 kubenswrapper[28120]: I0220 15:01:03.799461 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0" Feb 20 15:01:03.799665 master-0 kubenswrapper[28120]: I0220 15:01:03.799460 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" 
(UniqueName: \"kubernetes.io/host-path/19ce4b45-db46-4fc3-8d72-963de22f026b-etc-systemd\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:03.819327 master-0 kubenswrapper[28120]: I0220 15:01:03.819277 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 20 15:01:03.839577 master-0 kubenswrapper[28120]: I0220 15:01:03.839530 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-jscmz" Feb 20 15:01:03.859781 master-0 kubenswrapper[28120]: I0220 15:01:03.859706 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 20 15:01:03.865178 master-0 kubenswrapper[28120]: I0220 15:01:03.865123 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a278abf-8c59-4454-94d0-a0d0768cbec5-serving-cert\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 15:01:03.879360 master-0 kubenswrapper[28120]: I0220 15:01:03.879296 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 20 15:01:03.899429 master-0 kubenswrapper[28120]: I0220 15:01:03.899317 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 20 15:01:03.900153 master-0 kubenswrapper[28120]: I0220 15:01:03.900116 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " 
pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:03.900316 master-0 kubenswrapper[28120]: I0220 15:01:03.900246 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:03.901054 master-0 kubenswrapper[28120]: I0220 15:01:03.900987 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-sys\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp" Feb 20 15:01:03.901158 master-0 kubenswrapper[28120]: I0220 15:01:03.901118 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-sys\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp" Feb 20 15:01:03.901606 master-0 kubenswrapper[28120]: I0220 15:01:03.901567 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:03.901677 master-0 kubenswrapper[28120]: I0220 15:01:03.901618 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " 
pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:03.901740 master-0 kubenswrapper[28120]: I0220 15:01:03.901694 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-wtmp\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp" Feb 20 15:01:03.902002 master-0 kubenswrapper[28120]: I0220 15:01:03.901909 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-wtmp\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp" Feb 20 15:01:03.902081 master-0 kubenswrapper[28120]: I0220 15:01:03.902023 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:03.902381 master-0 kubenswrapper[28120]: I0220 15:01:03.902338 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-root\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp" Feb 20 15:01:03.902456 master-0 kubenswrapper[28120]: I0220 15:01:03.902421 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/caef1c17-56b0-479c-b000-caaac3c2b249-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: 
\"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 15:01:03.902569 master-0 kubenswrapper[28120]: I0220 15:01:03.902538 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/caef1c17-56b0-479c-b000-caaac3c2b249-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 15:01:03.902728 master-0 kubenswrapper[28120]: I0220 15:01:03.902683 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/99fe3b99-0b40-4887-bcc8-59caa515b99f-root\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp" Feb 20 15:01:03.905165 master-0 kubenswrapper[28120]: I0220 15:01:03.905123 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a278abf-8c59-4454-94d0-a0d0768cbec5-service-ca-bundle\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 15:01:03.919640 master-0 kubenswrapper[28120]: I0220 15:01:03.919590 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 20 15:01:03.929028 master-0 kubenswrapper[28120]: I0220 15:01:03.928963 28120 scope.go:117] "RemoveContainer" containerID="77c5708572ab9b4b6918c12a1fcd864571adf469d8703ecc7203af8fab7885f3" Feb 20 15:01:03.939321 master-0 kubenswrapper[28120]: I0220 15:01:03.939267 28120 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"default-dockercfg-5mrbx" Feb 20 15:01:03.959607 master-0 kubenswrapper[28120]: I0220 15:01:03.959470 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 20 15:01:03.980360 master-0 kubenswrapper[28120]: I0220 15:01:03.979340 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 20 15:01:03.980360 master-0 kubenswrapper[28120]: I0220 15:01:03.979866 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26473c28-db42-47e6-9164-8c441ccc48ca-serving-cert\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" Feb 20 15:01:03.999632 master-0 kubenswrapper[28120]: I0220 15:01:03.999580 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 20 15:01:04.007467 master-0 kubenswrapper[28120]: I0220 15:01:04.007425 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-metrics-tls\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 15:01:04.019495 master-0 kubenswrapper[28120]: I0220 15:01:04.019432 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 20 15:01:04.023493 master-0 kubenswrapper[28120]: I0220 15:01:04.023420 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26473c28-db42-47e6-9164-8c441ccc48ca-service-ca\") pod \"cluster-version-operator-57476485-nl7tx\" 
(UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" Feb 20 15:01:04.046855 master-0 kubenswrapper[28120]: I0220 15:01:04.046792 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 20 15:01:04.048661 master-0 kubenswrapper[28120]: I0220 15:01:04.048616 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-trusted-ca\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 15:01:04.059635 master-0 kubenswrapper[28120]: I0220 15:01:04.059596 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 20 15:01:04.069087 master-0 kubenswrapper[28120]: I0220 15:01:04.069024 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 20 15:01:04.080850 master-0 kubenswrapper[28120]: I0220 15:01:04.080813 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-4xtlh" Feb 20 15:01:04.105651 master-0 kubenswrapper[28120]: I0220 15:01:04.105555 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 20 15:01:04.119641 master-0 kubenswrapper[28120]: I0220 15:01:04.119585 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xrgsf" Feb 20 15:01:04.140393 master-0 kubenswrapper[28120]: I0220 15:01:04.140342 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 20 15:01:04.144796 master-0 
kubenswrapper[28120]: I0220 15:01:04.144737 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ee3a6748-0bbc-41bf-8726-a8db18faf03b-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-92c4x\" (UID: \"ee3a6748-0bbc-41bf-8726-a8db18faf03b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x" Feb 20 15:01:04.158713 master-0 kubenswrapper[28120]: I0220 15:01:04.158657 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 20 15:01:04.179848 master-0 kubenswrapper[28120]: I0220 15:01:04.179785 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-c8rnz" Feb 20 15:01:04.199526 master-0 kubenswrapper[28120]: I0220 15:01:04.198464 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 20 15:01:04.233768 master-0 kubenswrapper[28120]: I0220 15:01:04.233410 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 20 15:01:04.239639 master-0 kubenswrapper[28120]: I0220 15:01:04.239597 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 20 15:01:04.244028 master-0 kubenswrapper[28120]: I0220 15:01:04.241840 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/27ab8945-6a5b-4f7d-b893-6358da214499-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-m2bj7\" (UID: \"27ab8945-6a5b-4f7d-b893-6358da214499\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7" Feb 20 
15:01:04.259200 master-0 kubenswrapper[28120]: I0220 15:01:04.259107 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 20 15:01:04.264016 master-0 kubenswrapper[28120]: I0220 15:01:04.263953 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-script-lib\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:04.279152 master-0 kubenswrapper[28120]: I0220 15:01:04.279102 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 20 15:01:04.280084 master-0 kubenswrapper[28120]: I0220 15:01:04.280044 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" Feb 20 15:01:04.288471 master-0 kubenswrapper[28120]: I0220 15:01:04.288416 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-ovnkube-config\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:04.298607 master-0 kubenswrapper[28120]: I0220 15:01:04.298549 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 20 15:01:04.300013 master-0 kubenswrapper[28120]: I0220 15:01:04.299965 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/5d2b154b-de63-4c9b-99d8-487fb3035fb9-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" Feb 20 15:01:04.303533 master-0 kubenswrapper[28120]: I0220 15:01:04.303495 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21384bd0-495c-406a-9462-e9e740c04686-env-overrides\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:04.318847 master-0 kubenswrapper[28120]: I0220 15:01:04.318795 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 20 15:01:04.338881 master-0 kubenswrapper[28120]: I0220 15:01:04.338829 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 20 15:01:04.358989 master-0 kubenswrapper[28120]: I0220 15:01:04.358951 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 20 15:01:04.361379 master-0 kubenswrapper[28120]: I0220 15:01:04.361313 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-ovnkube-identity-cm\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 15:01:04.378438 master-0 kubenswrapper[28120]: I0220 15:01:04.378373 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 20 15:01:04.399312 master-0 kubenswrapper[28120]: I0220 15:01:04.399245 28120 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"env-overrides" Feb 20 15:01:04.409213 master-0 kubenswrapper[28120]: I0220 15:01:04.409154 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/33675e96-ce49-49be-9117-954ac7cca5d5-env-overrides\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 15:01:04.419664 master-0 kubenswrapper[28120]: I0220 15:01:04.419616 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 20 15:01:04.439261 master-0 kubenswrapper[28120]: I0220 15:01:04.439171 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 20 15:01:04.467970 master-0 kubenswrapper[28120]: I0220 15:01:04.467894 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-check-endpoints/0.log" Feb 20 15:01:04.470715 master-0 kubenswrapper[28120]: I0220 15:01:04.470680 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 20 15:01:04.473582 master-0 kubenswrapper[28120]: I0220 15:01:04.473553 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:04.477279 master-0 kubenswrapper[28120]: I0220 15:01:04.477238 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 15:01:04.479615 master-0 kubenswrapper[28120]: I0220 15:01:04.479590 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 20 15:01:04.487907 master-0 kubenswrapper[28120]: I0220 15:01:04.487820 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:04.500104 master-0 kubenswrapper[28120]: I0220 15:01:04.500059 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 20 15:01:04.519260 master-0 kubenswrapper[28120]: I0220 15:01:04.519199 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 20 15:01:04.523458 master-0 kubenswrapper[28120]: I0220 15:01:04.523417 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-var-lock\") pod \"fea431d7-394f-4639-abd6-c70a28921fc6\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " Feb 20 15:01:04.523524 master-0 kubenswrapper[28120]: I0220 15:01:04.523506 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-kubelet-dir\") pod 
\"fea431d7-394f-4639-abd6-c70a28921fc6\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " Feb 20 15:01:04.524428 master-0 kubenswrapper[28120]: I0220 15:01:04.524383 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-var-lock" (OuterVolumeSpecName: "var-lock") pod "fea431d7-394f-4639-abd6-c70a28921fc6" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:01:04.524519 master-0 kubenswrapper[28120]: I0220 15:01:04.524479 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fea431d7-394f-4639-abd6-c70a28921fc6" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:01:04.525375 master-0 kubenswrapper[28120]: I0220 15:01:04.525353 28120 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 15:01:04.525433 master-0 kubenswrapper[28120]: I0220 15:01:04.525379 28120 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fea431d7-394f-4639-abd6-c70a28921fc6-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 15:01:04.539245 master-0 kubenswrapper[28120]: I0220 15:01:04.539199 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 20 15:01:04.547411 master-0 kubenswrapper[28120]: I0220 15:01:04.547371 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 15:01:04.559273 master-0 kubenswrapper[28120]: I0220 15:01:04.559228 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 20 15:01:04.577498 master-0 kubenswrapper[28120]: I0220 15:01:04.577463 28120 request.go:700] Waited for 1.000067378s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&limit=500&resourceVersion=0 Feb 20 15:01:04.579280 master-0 kubenswrapper[28120]: I0220 15:01:04.579236 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 20 15:01:04.581384 master-0 kubenswrapper[28120]: I0220 15:01:04.581345 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-config\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 15:01:04.611557 master-0 kubenswrapper[28120]: I0220 15:01:04.611503 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 20 15:01:04.612608 master-0 kubenswrapper[28120]: I0220 15:01:04.612505 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8157f73d-c757-40c4-80bc-3c9de2f2288a-trusted-ca-bundle\") pod 
\"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 15:01:04.618714 master-0 kubenswrapper[28120]: I0220 15:01:04.618684 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 20 15:01:04.619205 master-0 kubenswrapper[28120]: I0220 15:01:04.619176 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/87cf4690-1ec1-44fc-94bd-730d9f2e6762-iptables-alerter-script\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r" Feb 20 15:01:04.639408 master-0 kubenswrapper[28120]: I0220 15:01:04.639382 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 20 15:01:04.658961 master-0 kubenswrapper[28120]: E0220 15:01:04.658912 28120 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.659041 master-0 kubenswrapper[28120]: E0220 15:01:04.658973 28120 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.659041 master-0 kubenswrapper[28120]: E0220 15:01:04.659018 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-service-ca podName:234a44fd-c153-47a6-a11d-7d4b7165c236 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.158996599 +0000 UTC m=+3.419790182 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-service-ca") pod "etcd-operator-545bf96f4d-jhd5c" (UID: "234a44fd-c153-47a6-a11d-7d4b7165c236") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.659110 master-0 kubenswrapper[28120]: E0220 15:01:04.659078 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b385880b-a26b-4353-8f6f-b7f926bcc67c-auth-proxy-config podName:b385880b-a26b-4353-8f6f-b7f926bcc67c nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.15904847 +0000 UTC m=+3.419842053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/b385880b-a26b-4353-8f6f-b7f926bcc67c-auth-proxy-config") pod "cluster-autoscaler-operator-86b8dc6d6-c8w7r" (UID: "b385880b-a26b-4353-8f6f-b7f926bcc67c") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.659147 master-0 kubenswrapper[28120]: E0220 15:01:04.659126 28120 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.659198 master-0 kubenswrapper[28120]: E0220 15:01:04.659173 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/787a4fee-6625-4df5-a432-c7e1190da777-signing-cabundle podName:787a4fee-6625-4df5-a432-c7e1190da777 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.159156853 +0000 UTC m=+3.419950556 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/787a4fee-6625-4df5-a432-c7e1190da777-signing-cabundle") pod "service-ca-576b4d78bd-fc795" (UID: "787a4fee-6625-4df5-a432-c7e1190da777") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.659303 master-0 kubenswrapper[28120]: E0220 15:01:04.659228 28120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.659303 master-0 kubenswrapper[28120]: E0220 15:01:04.659295 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-profile-collector-cert podName:64e9eca9-bbdd-4eca-9219-922bbab9b388 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.159283336 +0000 UTC m=+3.420077009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-profile-collector-cert") pod "olm-operator-5499d7f7bb-57rwb" (UID: "64e9eca9-bbdd-4eca-9219-922bbab9b388") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.659407 master-0 kubenswrapper[28120]: E0220 15:01:04.659389 28120 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.659442 master-0 kubenswrapper[28120]: E0220 15:01:04.659434 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-config podName:86f6836b-b018-4c7a-87ad-51809a4b9c7a nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.15942231 +0000 UTC m=+3.420215883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-config") pod "cluster-baremetal-operator-d6bb9bb76-k2tnk" (UID: "86f6836b-b018-4c7a-87ad-51809a4b9c7a") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.659625 master-0 kubenswrapper[28120]: I0220 15:01:04.659608 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 20 15:01:04.660349 master-0 kubenswrapper[28120]: E0220 15:01:04.660329 28120 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.660393 master-0 kubenswrapper[28120]: E0220 15:01:04.660383 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-config podName:234a44fd-c153-47a6-a11d-7d4b7165c236 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.160368863 +0000 UTC m=+3.421162446 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-config") pod "etcd-operator-545bf96f4d-jhd5c" (UID: "234a44fd-c153-47a6-a11d-7d4b7165c236") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.663651 master-0 kubenswrapper[28120]: I0220 15:01:04.663608 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db9dc349-5216-43ff-8c17-3a9384a010ea-config\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24" Feb 20 15:01:04.664348 master-0 kubenswrapper[28120]: E0220 15:01:04.664320 28120 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.664419 master-0 kubenswrapper[28120]: E0220 15:01:04.664402 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-machine-api-operator-tls podName:0bedbe69-fc4b-4bd7-bcc2-acead927eda2 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.164381513 +0000 UTC m=+3.425175086 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-gjdb4" (UID: "0bedbe69-fc4b-4bd7-bcc2-acead927eda2") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.664469 master-0 kubenswrapper[28120]: E0220 15:01:04.664434 28120 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.664500 master-0 kubenswrapper[28120]: E0220 15:01:04.664468 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cluster-baremetal-operator-tls podName:86f6836b-b018-4c7a-87ad-51809a4b9c7a nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.164460395 +0000 UTC m=+3.425253968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-d6bb9bb76-k2tnk" (UID: "86f6836b-b018-4c7a-87ad-51809a4b9c7a") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.664533 master-0 kubenswrapper[28120]: E0220 15:01:04.664498 28120 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.664533 master-0 kubenswrapper[28120]: E0220 15:01:04.664527 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31d71c90-cab7-4411-9426-0713cb026294-trusted-ca podName:31d71c90-cab7-4411-9426-0713cb026294 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.164519517 +0000 UTC m=+3.425313090 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/31d71c90-cab7-4411-9426-0713cb026294-trusted-ca") pod "cluster-node-tuning-operator-bcf775fc9-rpvf4" (UID: "31d71c90-cab7-4411-9426-0713cb026294") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.664592 master-0 kubenswrapper[28120]: E0220 15:01:04.664562 28120 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.664623 master-0 kubenswrapper[28120]: E0220 15:01:04.664605 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-images podName:86f6836b-b018-4c7a-87ad-51809a4b9c7a nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.164594068 +0000 UTC m=+3.425387641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-images") pod "cluster-baremetal-operator-d6bb9bb76-k2tnk" (UID: "86f6836b-b018-4c7a-87ad-51809a4b9c7a") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.664623 master-0 kubenswrapper[28120]: E0220 15:01:04.664565 28120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.664675 master-0 kubenswrapper[28120]: E0220 15:01:04.664647 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-srv-cert podName:2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.16463924 +0000 UTC m=+3.425432813 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-srv-cert") pod "catalog-operator-596f79dd6f-2g7jd" (UID: "2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.665825 master-0 kubenswrapper[28120]: E0220 15:01:04.665801 28120 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.665894 master-0 kubenswrapper[28120]: E0220 15:01:04.665872 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-config podName:0bedbe69-fc4b-4bd7-bcc2-acead927eda2 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.16585701 +0000 UTC m=+3.426650643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-config") pod "machine-api-operator-5c7cf458b4-gjdb4" (UID: "0bedbe69-fc4b-4bd7-bcc2-acead927eda2") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.667320 master-0 kubenswrapper[28120]: E0220 15:01:04.667295 28120 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.667369 master-0 kubenswrapper[28120]: E0220 15:01:04.667314 28120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.667369 master-0 kubenswrapper[28120]: E0220 15:01:04.667360 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49044786-483a-406e-8750-f6ded400841d-control-plane-machine-set-operator-tls podName:49044786-483a-406e-8750-f6ded400841d nodeName:}" failed. 
No retries permitted until 2026-02-20 15:01:05.167347317 +0000 UTC m=+3.428140890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/49044786-483a-406e-8750-f6ded400841d-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-686847ff5f-2tpv8" (UID: "49044786-483a-406e-8750-f6ded400841d") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.667440 master-0 kubenswrapper[28120]: E0220 15:01:04.667383 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-webhook-cert podName:4ecbdf77-0c73-487e-943e-5315a0f8b8d4 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.167373178 +0000 UTC m=+3.428166751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-webhook-cert") pod "packageserver-6c5ff764cd-l2884" (UID: "4ecbdf77-0c73-487e-943e-5315a0f8b8d4") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.668528 master-0 kubenswrapper[28120]: E0220 15:01:04.668495 28120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.668579 master-0 kubenswrapper[28120]: E0220 15:01:04.668567 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-profile-collector-cert podName:2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.168550577 +0000 UTC m=+3.429344220 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-profile-collector-cert") pod "catalog-operator-596f79dd6f-2g7jd" (UID: "2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.676315 master-0 kubenswrapper[28120]: E0220 15:01:04.676278 28120 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.676315 master-0 kubenswrapper[28120]: E0220 15:01:04.676303 28120 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.676408 master-0 kubenswrapper[28120]: E0220 15:01:04.676362 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4339bd5-b8d1-467e-8158-4464ea901148-serving-cert podName:a4339bd5-b8d1-467e-8158-4464ea901148 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.176342911 +0000 UTC m=+3.437136544 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4339bd5-b8d1-467e-8158-4464ea901148-serving-cert") pod "openshift-config-operator-6f47d587d6-hsqjc" (UID: "a4339bd5-b8d1-467e-8158-4464ea901148") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.676408 master-0 kubenswrapper[28120]: E0220 15:01:04.676389 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-stats-auth podName:5f55b652-bef8-4f50-9d1d-9d0a340c1dea nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.176376752 +0000 UTC m=+3.437170325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-stats-auth") pod "router-default-7b65dc9fcb-tlsdt" (UID: "5f55b652-bef8-4f50-9d1d-9d0a340c1dea") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.678979 master-0 kubenswrapper[28120]: I0220 15:01:04.678953 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 20 15:01:04.683599 master-0 kubenswrapper[28120]: E0220 15:01:04.683561 28120 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.683668 master-0 kubenswrapper[28120]: E0220 15:01:04.683594 28120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.683668 master-0 kubenswrapper[28120]: E0220 15:01:04.683642 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-proxy-tls podName:d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.183624182 +0000 UTC m=+3.444417825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-proxy-tls") pod "machine-config-daemon-ztgdm" (UID: "d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.683736 master-0 kubenswrapper[28120]: E0220 15:01:04.683668 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-apiservice-cert podName:4ecbdf77-0c73-487e-943e-5315a0f8b8d4 nodeName:}" failed. 
No retries permitted until 2026-02-20 15:01:05.183655833 +0000 UTC m=+3.444449406 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-apiservice-cert") pod "packageserver-6c5ff764cd-l2884" (UID: "4ecbdf77-0c73-487e-943e-5315a0f8b8d4") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.683835 master-0 kubenswrapper[28120]: E0220 15:01:04.683803 28120 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.683908 master-0 kubenswrapper[28120]: E0220 15:01:04.683888 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-ca podName:234a44fd-c153-47a6-a11d-7d4b7165c236 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.183870968 +0000 UTC m=+3.444664541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-ca") pod "etcd-operator-545bf96f4d-jhd5c" (UID: "234a44fd-c153-47a6-a11d-7d4b7165c236") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.684945 master-0 kubenswrapper[28120]: E0220 15:01:04.684896 28120 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.685007 master-0 kubenswrapper[28120]: E0220 15:01:04.684992 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-proxy-tls podName:3bf5be04-e4dd-44d9-be1a-3abe6ddd2367 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.184979736 +0000 UTC m=+3.445773319 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-proxy-tls") pod "machine-config-controller-54cb48566c-j9q5m" (UID: "3bf5be04-e4dd-44d9-be1a-3abe6ddd2367") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.689023 master-0 kubenswrapper[28120]: E0220 15:01:04.688994 28120 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.689081 master-0 kubenswrapper[28120]: E0220 15:01:04.689057 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-service-ca-bundle podName:5f55b652-bef8-4f50-9d1d-9d0a340c1dea nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.189040937 +0000 UTC m=+3.449834510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-service-ca-bundle") pod "router-default-7b65dc9fcb-tlsdt" (UID: "5f55b652-bef8-4f50-9d1d-9d0a340c1dea") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.690407 master-0 kubenswrapper[28120]: E0220 15:01:04.690381 28120 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.690469 master-0 kubenswrapper[28120]: E0220 15:01:04.690448 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate podName:5f55b652-bef8-4f50-9d1d-9d0a340c1dea nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.190435032 +0000 UTC m=+3.451228605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate") pod "router-default-7b65dc9fcb-tlsdt" (UID: "5f55b652-bef8-4f50-9d1d-9d0a340c1dea") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.691783 master-0 kubenswrapper[28120]: E0220 15:01:04.691751 28120 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.691853 master-0 kubenswrapper[28120]: E0220 15:01:04.691831 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cert podName:86f6836b-b018-4c7a-87ad-51809a4b9c7a nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.191808726 +0000 UTC m=+3.452602379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cert") pod "cluster-baremetal-operator-d6bb9bb76-k2tnk" (UID: "86f6836b-b018-4c7a-87ad-51809a4b9c7a") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.692727 master-0 kubenswrapper[28120]: E0220 15:01:04.692702 28120 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.692778 master-0 kubenswrapper[28120]: E0220 15:01:04.692726 28120 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.692778 master-0 kubenswrapper[28120]: E0220 15:01:04.692761 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-images podName:0bedbe69-fc4b-4bd7-bcc2-acead927eda2 nodeName:}" failed. 
No retries permitted until 2026-02-20 15:01:05.19274689 +0000 UTC m=+3.453540463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-images") pod "machine-api-operator-5c7cf458b4-gjdb4" (UID: "0bedbe69-fc4b-4bd7-bcc2-acead927eda2") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.692842 master-0 kubenswrapper[28120]: E0220 15:01:04.692788 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-metrics-certs podName:5f55b652-bef8-4f50-9d1d-9d0a340c1dea nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.19277298 +0000 UTC m=+3.453566633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-metrics-certs") pod "router-default-7b65dc9fcb-tlsdt" (UID: "5f55b652-bef8-4f50-9d1d-9d0a340c1dea") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.694366 master-0 kubenswrapper[28120]: E0220 15:01:04.694325 28120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.694422 master-0 kubenswrapper[28120]: E0220 15:01:04.694390 28120 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.694422 master-0 kubenswrapper[28120]: E0220 15:01:04.694417 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-srv-cert podName:64e9eca9-bbdd-4eca-9219-922bbab9b388 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.194395681 +0000 UTC m=+3.455189284 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-srv-cert") pod "olm-operator-5499d7f7bb-57rwb" (UID: "64e9eca9-bbdd-4eca-9219-922bbab9b388") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.694486 master-0 kubenswrapper[28120]: E0220 15:01:04.694443 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b385880b-a26b-4353-8f6f-b7f926bcc67c-cert podName:b385880b-a26b-4353-8f6f-b7f926bcc67c nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.194430532 +0000 UTC m=+3.455224105 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b385880b-a26b-4353-8f6f-b7f926bcc67c-cert") pod "cluster-autoscaler-operator-86b8dc6d6-c8w7r" (UID: "b385880b-a26b-4353-8f6f-b7f926bcc67c") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.699029 master-0 kubenswrapper[28120]: I0220 15:01:04.698999 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 20 15:01:04.736945 master-0 kubenswrapper[28120]: I0220 15:01:04.736685 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 20 15:01:04.739452 master-0 kubenswrapper[28120]: I0220 15:01:04.739383 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 20 15:01:04.759591 master-0 kubenswrapper[28120]: I0220 15:01:04.759533 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 20 15:01:04.780096 master-0 kubenswrapper[28120]: I0220 15:01:04.780026 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 20 15:01:04.792465 master-0 
kubenswrapper[28120]: E0220 15:01:04.792393 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.792671 master-0 kubenswrapper[28120]: E0220 15:01:04.792563 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-metrics-client-ca podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.292531985 +0000 UTC m=+3.553325558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-metrics-client-ca") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.792671 master-0 kubenswrapper[28120]: E0220 15:01:04.792641 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.792671 master-0 kubenswrapper[28120]: E0220 15:01:04.792639 28120 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.792891 master-0 kubenswrapper[28120]: E0220 15:01:04.792684 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-metrics-server-audit-profiles podName:bdd203e0-3dd9-4e9d-81f1-46f60d235e38 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.292674918 +0000 UTC m=+3.553468501 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-metrics-server-audit-profiles") pod "metrics-server-9bcdd7684-kz2z2" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.792891 master-0 kubenswrapper[28120]: E0220 15:01:04.792718 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.792891 master-0 kubenswrapper[28120]: E0220 15:01:04.792748 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-metrics-client-ca podName:c0b78aa6-7bc8-4221-81f5-bf62a7110380 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.2927405 +0000 UTC m=+3.553534073 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-metrics-client-ca") pod "kube-state-metrics-59584d565f-stlhz" (UID: "c0b78aa6-7bc8-4221-81f5-bf62a7110380") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.792891 master-0 kubenswrapper[28120]: E0220 15:01:04.792766 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-client-certs podName:bdd203e0-3dd9-4e9d-81f1-46f60d235e38 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.292760111 +0000 UTC m=+3.553553684 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-client-certs") pod "metrics-server-9bcdd7684-kz2z2" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.792876 28120 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793042 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-config podName:996d4949-f92c-42ac-9bda-8c6ec0295e92 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.293011317 +0000 UTC m=+3.553804920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-config") pod "machine-approver-7dd9c7d7b9-xcrlh" (UID: "996d4949-f92c-42ac-9bda-8c6ec0295e92") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793098 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793106 28120 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793117 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Feb 20 
15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793143 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle podName:bdd203e0-3dd9-4e9d-81f1-46f60d235e38 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.29313197 +0000 UTC m=+3.553925543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle") pod "metrics-server-9bcdd7684-kz2z2" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793167 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-cert podName:4e7cac87-2eaa-4dad-b2dc-c8ed0557c665 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.29315359 +0000 UTC m=+3.553947183 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-cert") pod "ingress-canary-5qlzq" (UID: "4e7cac87-2eaa-4dad-b2dc-c8ed0557c665") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793169 28120 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793193 28120 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793256 28120 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793216 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-custom-resource-state-configmap podName:c0b78aa6-7bc8-4221-81f5-bf62a7110380 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.293183991 +0000 UTC m=+3.553977634 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-59584d565f-stlhz" (UID: "c0b78aa6-7bc8-4221-81f5-bf62a7110380") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793359 28120 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793367 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-kube-rbac-proxy-config podName:ae43311e-14ba-40a1-bdbf-f02d68031757 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.293342825 +0000 UTC m=+3.554136518 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-754bc4d665-gsn48" (UID: "ae43311e-14ba-40a1-bdbf-f02d68031757") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793412 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49defec6-a225-47ab-99ff-7a846f23eb00-webhook-certs podName:49defec6-a225-47ab-99ff-7a846f23eb00 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.293391016 +0000 UTC m=+3.554184729 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/49defec6-a225-47ab-99ff-7a846f23eb00-webhook-certs") pod "multus-admission-controller-5f54bf67d4-7j5jb" (UID: "49defec6-a225-47ab-99ff-7a846f23eb00") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793449 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client-kube-rbac-proxy-config podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.293431747 +0000 UTC m=+3.554225440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793454 28120 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793487 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca podName:bdf18981-b755-4b11-8793-38bc5e2e755b nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.293468318 +0000 UTC m=+3.554262011 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca") pod "controller-manager-647657fcb-w9586" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793547 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca podName:63d49b12-8d51-4d97-9f06-ca4c5bf10dcd nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.293518259 +0000 UTC m=+3.554311932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca") pod "route-controller-manager-584d5796b9-lf8t5" (UID: "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.793648 master-0 kubenswrapper[28120]: E0220 15:01:04.793622 28120 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.793779 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-federate-client-tls podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.293740935 +0000 UTC m=+3.554534528 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-federate-client-tls") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.793834 28120 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.793845 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.793857 28120 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.793885 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-images podName:caef1c17-56b0-479c-b000-caaac3c2b249 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.293868188 +0000 UTC m=+3.554661791 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-images") pod "cluster-cloud-controller-manager-operator-67dd8d7969-855tj" (UID: "caef1c17-56b0-479c-b000-caaac3c2b249") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.793942 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-auth-proxy-config podName:caef1c17-56b0-479c-b000-caaac3c2b249 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.293906849 +0000 UTC m=+3.554700552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-67dd8d7969-855tj" (UID: "caef1c17-56b0-479c-b000-caaac3c2b249") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.793984 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-trusted-ca-bundle podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.29395794 +0000 UTC m=+3.554751553 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-trusted-ca-bundle") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.794836 28120 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.794895 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.794911 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-client-tls podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.294887994 +0000 UTC m=+3.555681587 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-client-tls") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.794916 28120 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.794988 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99fe3b99-0b40-4887-bcc8-59caa515b99f-metrics-client-ca podName:99fe3b99-0b40-4887-bcc8-59caa515b99f nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.294968786 +0000 UTC m=+3.555762379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/99fe3b99-0b40-4887-bcc8-59caa515b99f-metrics-client-ca") pod "node-exporter-bk9bp" (UID: "99fe3b99-0b40-4887-bcc8-59caa515b99f") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795004 28120 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795034 28120 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795036 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-auth-proxy-config podName:996d4949-f92c-42ac-9bda-8c6ec0295e92 
nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.295009617 +0000 UTC m=+3.555803240 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-auth-proxy-config") pod "machine-approver-7dd9c7d7b9-xcrlh" (UID: "996d4949-f92c-42ac-9bda-8c6ec0295e92") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795054 28120 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795048 28120 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795079 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-tls podName:a39c5481-961c-4ac2-8c5b-a2c0165f4188 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.295066868 +0000 UTC m=+3.555860461 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-tls") pod "openshift-state-metrics-6dbff8cb4c-dcjr4" (UID: "a39c5481-961c-4ac2-8c5b-a2c0165f4188") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795071 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795115 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996d4949-f92c-42ac-9bda-8c6ec0295e92-machine-approver-tls podName:996d4949-f92c-42ac-9bda-8c6ec0295e92 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.295094829 +0000 UTC m=+3.555888432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/996d4949-f92c-42ac-9bda-8c6ec0295e92-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-xcrlh" (UID: "996d4949-f92c-42ac-9bda-8c6ec0295e92") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795009 28120 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795145 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-kube-rbac-proxy-config podName:c0b78aa6-7bc8-4221-81f5-bf62a7110380 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.29513137 +0000 UTC m=+3.555924963 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-59584d565f-stlhz" (UID: "c0b78aa6-7bc8-4221-81f5-bf62a7110380") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795203 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a39c5481-961c-4ac2-8c5b-a2c0165f4188-metrics-client-ca podName:a39c5481-961c-4ac2-8c5b-a2c0165f4188 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.29516255 +0000 UTC m=+3.555956143 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/a39c5481-961c-4ac2-8c5b-a2c0165f4188-metrics-client-ca") pod "openshift-state-metrics-6dbff8cb4c-dcjr4" (UID: "a39c5481-961c-4ac2-8c5b-a2c0165f4188") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795234 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config podName:bdf18981-b755-4b11-8793-38bc5e2e755b nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.295217412 +0000 UTC m=+3.556011015 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config") pod "controller-manager-647657fcb-w9586" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.795696 master-0 kubenswrapper[28120]: E0220 15:01:04.795256 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert podName:bdf18981-b755-4b11-8793-38bc5e2e755b nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.295246262 +0000 UTC m=+3.556039855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert") pod "controller-manager-647657fcb-w9586" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.797738 master-0 kubenswrapper[28120]: E0220 15:01:04.796188 28120 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.797738 master-0 kubenswrapper[28120]: E0220 15:01:04.796232 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-node-bootstrap-token podName:ef3a09a5-b019-48a3-97f8-7ddadb37394e nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.296221567 +0000 UTC m=+3.557015130 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-node-bootstrap-token") pod "machine-config-server-5frvf" (UID: "ef3a09a5-b019-48a3-97f8-7ddadb37394e") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.797738 master-0 kubenswrapper[28120]: E0220 15:01:04.796260 28120 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.797738 master-0 kubenswrapper[28120]: E0220 15:01:04.796287 28120 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.797738 master-0 kubenswrapper[28120]: E0220 15:01:04.796292 28120 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.797738 master-0 kubenswrapper[28120]: E0220 15:01:04.796320 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-server-tls podName:bdd203e0-3dd9-4e9d-81f1-46f60d235e38 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.296311249 +0000 UTC m=+3.557104932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-server-tls") pod "metrics-server-9bcdd7684-kz2z2" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.797738 master-0 kubenswrapper[28120]: E0220 15:01:04.796263 28120 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.797738 master-0 kubenswrapper[28120]: E0220 15:01:04.796340 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-certs podName:ef3a09a5-b019-48a3-97f8-7ddadb37394e nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.296330559 +0000 UTC m=+3.557124252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-certs") pod "machine-config-server-5frvf" (UID: "ef3a09a5-b019-48a3-97f8-7ddadb37394e") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.797738 master-0 kubenswrapper[28120]: E0220 15:01:04.796392 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.29635289 +0000 UTC m=+3.557146593 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.797738 master-0 kubenswrapper[28120]: E0220 15:01:04.796413 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caef1c17-56b0-479c-b000-caaac3c2b249-cloud-controller-manager-operator-tls podName:caef1c17-56b0-479c-b000-caaac3c2b249 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.296404121 +0000 UTC m=+3.557197824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/caef1c17-56b0-479c-b000-caaac3c2b249-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-67dd8d7969-855tj" (UID: "caef1c17-56b0-479c-b000-caaac3c2b249") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.799054 master-0 kubenswrapper[28120]: E0220 15:01:04.799008 28120 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7pkl9jqft06ca: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.799198 master-0 kubenswrapper[28120]: I0220 15:01:04.799060 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 20 15:01:04.799198 master-0 kubenswrapper[28120]: E0220 15:01:04.799092 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle podName:bdd203e0-3dd9-4e9d-81f1-46f60d235e38 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.299072598 +0000 UTC m=+3.559866191 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle") pod "metrics-server-9bcdd7684-kz2z2" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.799198 master-0 kubenswrapper[28120]: E0220 15:01:04.799103 28120 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.799198 master-0 kubenswrapper[28120]: E0220 15:01:04.799129 28120 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.799198 master-0 kubenswrapper[28120]: E0220 15:01:04.799127 28120 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.799198 master-0 kubenswrapper[28120]: E0220 15:01:04.799179 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-kube-rbac-proxy-config podName:a39c5481-961c-4ac2-8c5b-a2c0165f4188 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.299164 +0000 UTC m=+3.559957603 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-6dbff8cb4c-dcjr4" (UID: "a39c5481-961c-4ac2-8c5b-a2c0165f4188") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.799198 master-0 kubenswrapper[28120]: E0220 15:01:04.799213 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert podName:63d49b12-8d51-4d97-9f06-ca4c5bf10dcd nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.299192041 +0000 UTC m=+3.559985644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert") pod "route-controller-manager-584d5796b9-lf8t5" (UID: "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799226 28120 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799244 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles podName:bdf18981-b755-4b11-8793-38bc5e2e755b nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.299230552 +0000 UTC m=+3.560024145 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles") pod "controller-manager-647657fcb-w9586" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b") : failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799269 28120 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799276 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-tls podName:99fe3b99-0b40-4887-bcc8-59caa515b99f nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.299261032 +0000 UTC m=+3.560054625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-tls") pod "node-exporter-bk9bp" (UID: "99fe3b99-0b40-4887-bcc8-59caa515b99f") : failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799292 28120 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799307 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799348 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 20 15:01:04.799726 master-0 
kubenswrapper[28120]: E0220 15:01:04.799355 28120 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799307 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-kube-rbac-proxy-config podName:99fe3b99-0b40-4887-bcc8-59caa515b99f nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.299295693 +0000 UTC m=+3.560089296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-kube-rbac-proxy-config") pod "node-exporter-bk9bp" (UID: "99fe3b99-0b40-4887-bcc8-59caa515b99f") : failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799391 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae43311e-14ba-40a1-bdbf-f02d68031757-metrics-client-ca podName:ae43311e-14ba-40a1-bdbf-f02d68031757 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.299380025 +0000 UTC m=+3.560173618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/ae43311e-14ba-40a1-bdbf-f02d68031757-metrics-client-ca") pod "prometheus-operator-754bc4d665-gsn48" (UID: "ae43311e-14ba-40a1-bdbf-f02d68031757") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799401 28120 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799413 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config podName:63d49b12-8d51-4d97-9f06-ca4c5bf10dcd nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.299402116 +0000 UTC m=+3.560195719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config") pod "route-controller-manager-584d5796b9-lf8t5" (UID: "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799441 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-tls podName:ae43311e-14ba-40a1-bdbf-f02d68031757 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.299429227 +0000 UTC m=+3.560222820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-gsn48" (UID: "ae43311e-14ba-40a1-bdbf-f02d68031757") : failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799464 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-serving-certs-ca-bundle podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.299452547 +0000 UTC m=+3.560246150 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-serving-certs-ca-bundle") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:04.799726 master-0 kubenswrapper[28120]: E0220 15:01:04.799490 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-tls podName:c0b78aa6-7bc8-4221-81f5-bf62a7110380 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:05.299476248 +0000 UTC m=+3.560269841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-tls") pod "kube-state-metrics-59584d565f-stlhz" (UID: "c0b78aa6-7bc8-4221-81f5-bf62a7110380") : failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:04.819536 master-0 kubenswrapper[28120]: I0220 15:01:04.819454 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 20 15:01:04.839643 master-0 kubenswrapper[28120]: I0220 15:01:04.839543 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 20 15:01:04.860197 master-0 kubenswrapper[28120]: I0220 15:01:04.860122 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 20 15:01:04.879266 master-0 kubenswrapper[28120]: I0220 15:01:04.879195 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 20 15:01:04.899406 master-0 kubenswrapper[28120]: I0220 15:01:04.899299 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7zgzx"
Feb 20 15:01:04.919565 master-0 kubenswrapper[28120]: I0220 15:01:04.919487 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mvrxq"
Feb 20 15:01:04.939514 master-0 kubenswrapper[28120]: I0220 15:01:04.939422 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 20 15:01:04.959690 master-0 kubenswrapper[28120]: I0220 15:01:04.959623 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 20 15:01:04.979227 master-0 kubenswrapper[28120]: I0220 15:01:04.979134 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 20 15:01:05.000124 master-0 kubenswrapper[28120]: I0220 15:01:04.999916 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 20 15:01:05.019665 master-0 kubenswrapper[28120]: I0220 15:01:05.019574 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 20 15:01:05.039289 master-0 kubenswrapper[28120]: I0220 15:01:05.039197 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-btmxs"
Feb 20 15:01:05.061507 master-0 kubenswrapper[28120]: I0220 15:01:05.061439 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 20 15:01:05.080730 master-0 kubenswrapper[28120]: I0220 15:01:05.080678 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-n8qfb"
Feb 20 15:01:05.099556 master-0 kubenswrapper[28120]: I0220 15:01:05.099493 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 20 15:01:05.118835 master-0 kubenswrapper[28120]: I0220 15:01:05.118740 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 20 15:01:05.139683 master-0 kubenswrapper[28120]: I0220 15:01:05.139617 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-77fdh"
Feb 20 15:01:05.159494 master-0 kubenswrapper[28120]: I0220 15:01:05.159428 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-67ksg"
Feb 20 15:01:05.179247 master-0 kubenswrapper[28120]: I0220 15:01:05.179167 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 20 15:01:05.202879 master-0 kubenswrapper[28120]: I0220 15:01:05.202837 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 20 15:01:05.219823 master-0 kubenswrapper[28120]: I0220 15:01:05.219772 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 20 15:01:05.240047 master-0 kubenswrapper[28120]: I0220 15:01:05.239988 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 20 15:01:05.252124 master-0 kubenswrapper[28120]: I0220 15:01:05.251978 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"
Feb 20 15:01:05.252124 master-0 kubenswrapper[28120]: I0220 15:01:05.252085 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-metrics-certs\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt"
Feb 20 15:01:05.252124 master-0 kubenswrapper[28120]: I0220 15:01:05.252125 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-images\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"
Feb 20 15:01:05.252491 master-0 kubenswrapper[28120]: I0220 15:01:05.252155 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b385880b-a26b-4353-8f6f-b7f926bcc67c-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r"
Feb 20 15:01:05.252491 master-0 kubenswrapper[28120]: I0220 15:01:05.252248 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-srv-cert\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 15:01:05.252491 master-0 kubenswrapper[28120]: I0220 15:01:05.252282 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 15:01:05.252491 master-0 kubenswrapper[28120]: I0220 15:01:05.252371 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/787a4fee-6625-4df5-a432-c7e1190da777-signing-cabundle\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795"
Feb 20 15:01:05.252491 master-0 kubenswrapper[28120]: I0220 15:01:05.252408 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 15:01:05.252491 master-0 kubenswrapper[28120]: I0220 15:01:05.252433 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b385880b-a26b-4353-8f6f-b7f926bcc67c-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r"
Feb 20 15:01:05.252838 master-0 kubenswrapper[28120]: I0220 15:01:05.252568 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"
Feb 20 15:01:05.252838 master-0 kubenswrapper[28120]: I0220 15:01:05.252617 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-config\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 15:01:05.252838 master-0 kubenswrapper[28120]: I0220 15:01:05.252793 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"
Feb 20 15:01:05.253052 master-0 kubenswrapper[28120]: I0220 15:01:05.252849 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"
Feb 20 15:01:05.253052 master-0 kubenswrapper[28120]: I0220 15:01:05.252889 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk"
Feb 20 15:01:05.253052 master-0 kubenswrapper[28120]: I0220 15:01:05.252942 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-srv-cert\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"
Feb 20 15:01:05.253052 master-0 kubenswrapper[28120]: I0220 15:01:05.252968 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31d71c90-cab7-4411-9426-0713cb026294-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 15:01:05.253052 master-0 kubenswrapper[28120]: I0220 15:01:05.253004 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-config\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"
Feb 20 15:01:05.253343 master-0 kubenswrapper[28120]: I0220 15:01:05.253211 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-webhook-cert\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884"
Feb 20 15:01:05.253343 master-0 kubenswrapper[28120]: I0220 15:01:05.253238 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49044786-483a-406e-8750-f6ded400841d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-2tpv8\" (UID: \"49044786-483a-406e-8750-f6ded400841d\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8"
Feb 20 15:01:05.253343 master-0 kubenswrapper[28120]: I0220 15:01:05.253282 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"
Feb 20 15:01:05.253343 master-0 kubenswrapper[28120]: I0220 15:01:05.253310 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-stats-auth\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt"
Feb 20 15:01:05.253343 master-0 kubenswrapper[28120]: I0220 15:01:05.253338 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4339bd5-b8d1-467e-8158-4464ea901148-serving-cert\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc"
Feb 20 15:01:05.253648 master-0 kubenswrapper[28120]: I0220 15:01:05.253463 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-apiservice-cert\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884"
Feb 20 15:01:05.253648 master-0 kubenswrapper[28120]: I0220 15:01:05.253510 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-proxy-tls\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " pod="openshift-machine-config-operator/machine-config-daemon-ztgdm"
Feb 20 15:01:05.253648 master-0 kubenswrapper[28120]: I0220 15:01:05.253546 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 15:01:05.253648 master-0 kubenswrapper[28120]: I0220 15:01:05.253600 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-proxy-tls\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m"
Feb 20 15:01:05.253973 master-0 kubenswrapper[28120]: I0220 15:01:05.253749 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-service-ca-bundle\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt"
Feb 20 15:01:05.253973 master-0 kubenswrapper[28120]: I0220 15:01:05.253947 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt"
Feb 20 15:01:05.254547 master-0 kubenswrapper[28120]: I0220 15:01:05.254497 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-images\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"
Feb 20 15:01:05.254788 master-0 kubenswrapper[28120]: I0220 15:01:05.254746 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b385880b-a26b-4353-8f6f-b7f926bcc67c-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r"
Feb 20 15:01:05.255095 master-0 kubenswrapper[28120]: I0220 15:01:05.255053 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-srv-cert\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 15:01:05.255364 master-0 kubenswrapper[28120]: I0220 15:01:05.255329 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/64e9eca9-bbdd-4eca-9219-922bbab9b388-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 15:01:05.255574 master-0 kubenswrapper[28120]: I0220 15:01:05.255537 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/787a4fee-6625-4df5-a432-c7e1190da777-signing-cabundle\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795"
Feb 20 15:01:05.255749 master-0 kubenswrapper[28120]: I0220 15:01:05.255715 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 15:01:05.256011 master-0 kubenswrapper[28120]: I0220 15:01:05.255888 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b385880b-a26b-4353-8f6f-b7f926bcc67c-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r"
Feb 20 15:01:05.256125 master-0 kubenswrapper[28120]: I0220 15:01:05.256104 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-config\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 15:01:05.256363 master-0 kubenswrapper[28120]: I0220 15:01:05.256317 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"
Feb 20 15:01:05.256629 master-0 kubenswrapper[28120]: I0220 15:01:05.256602 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-srv-cert\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"
Feb 20 15:01:05.256968 master-0 kubenswrapper[28120]: I0220 15:01:05.256905 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31d71c90-cab7-4411-9426-0713cb026294-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 15:01:05.257142 master-0 kubenswrapper[28120]: I0220 15:01:05.257099 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-config\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"
Feb 20 15:01:05.257449 master-0 kubenswrapper[28120]: I0220 15:01:05.257407 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49044786-483a-406e-8750-f6ded400841d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-2tpv8\" (UID: \"49044786-483a-406e-8750-f6ded400841d\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8"
Feb 20 15:01:05.257664 master-0 kubenswrapper[28120]: I0220 15:01:05.257630 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"
Feb 20 15:01:05.258057 master-0 kubenswrapper[28120]: I0220 15:01:05.258001 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/234a44fd-c153-47a6-a11d-7d4b7165c236-etcd-ca\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 15:01:05.259884 master-0 kubenswrapper[28120]: I0220 15:01:05.259841 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 20 15:01:05.267566 master-0 kubenswrapper[28120]: I0220 15:01:05.267522 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-webhook-cert\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884"
Feb 20 15:01:05.268876 master-0 kubenswrapper[28120]: I0220 15:01:05.268847 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-apiservice-cert\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884"
Feb 20 15:01:05.278644 master-0 kubenswrapper[28120]: I0220 15:01:05.278595 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-tljfd"
Feb 20 15:01:05.299850 master-0 kubenswrapper[28120]: I0220 15:01:05.299778 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 20 15:01:05.308501 master-0 kubenswrapper[28120]: I0220 15:01:05.308446 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4339bd5-b8d1-467e-8158-4464ea901148-serving-cert\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc"
Feb 20 15:01:05.319680 master-0 kubenswrapper[28120]: I0220 15:01:05.319621 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-mxnq7"
Feb 20 15:01:05.339873 master-0 kubenswrapper[28120]: I0220 15:01:05.339807 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-9g7zv"
Feb 20 15:01:05.355253 master-0 kubenswrapper[28120]: I0220 15:01:05.355189 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"
Feb 20 15:01:05.355376 master-0 kubenswrapper[28120]: I0220 15:01:05.355253 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-serving-certs-ca-bundle\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 15:01:05.355376 master-0 kubenswrapper[28120]: I0220 15:01:05.355308 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48"
Feb 20 15:01:05.355376 master-0 kubenswrapper[28120]: I0220 15:01:05.355351 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 15:01:05.355376 master-0 kubenswrapper[28120]: I0220 15:01:05.355375 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"
Feb 20 15:01:05.355631 master-0 kubenswrapper[28120]: I0220 15:01:05.355401 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 15:01:05.355631 master-0 kubenswrapper[28120]: I0220 15:01:05.355453 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:01:05.355631 master-0 kubenswrapper[28120]: I0220 15:01:05.355487 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-tls\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 15:01:05.355631 master-0 kubenswrapper[28120]: I0220 15:01:05.355575 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 15:01:05.355861 master-0 kubenswrapper[28120]: I0220 15:01:05.355692 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 15:01:05.355861 master-0 kubenswrapper[28120]: I0220 15:01:05.355720 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-metrics-client-ca\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 15:01:05.355861 master-0 kubenswrapper[28120]: I0220 15:01:05.355755 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-client-certs\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:01:05.355861 master-0 kubenswrapper[28120]: I0220 15:01:05.355797 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-metrics-server-audit-profiles\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:01:05.355861 master-0 kubenswrapper[28120]: I0220 15:01:05.355823 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 15:01:05.355861 master-0 kubenswrapper[28120]: I0220 15:01:05.355848 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 15:01:05.356241 master-0 kubenswrapper[28120]: I0220 15:01:05.355874 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-cert\") pod \"ingress-canary-5qlzq\" (UID: \"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665\") " pod="openshift-ingress-canary/ingress-canary-5qlzq"
Feb 20 15:01:05.356241 master-0 kubenswrapper[28120]: I0220 15:01:05.355897 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-config\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh"
Feb 20 15:01:05.356241 master-0 kubenswrapper[28120]: I0220 15:01:05.355959 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:01:05.356241 master-0 kubenswrapper[28120]: I0220 15:01:05.355993 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/49defec6-a225-47ab-99ff-7a846f23eb00-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-7j5jb\" (UID: \"49defec6-a225-47ab-99ff-7a846f23eb00\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb"
Feb 20 15:01:05.356241 master-0 kubenswrapper[28120]: I0220 15:01:05.356018 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48"
Feb 20 15:01:05.356241 master-0 kubenswrapper[28120]: I0220 15:01:05.356046 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx"
Feb 20 15:01:05.356241 master-0 kubenswrapper[28120]: I0220 15:01:05.356081 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586"
Feb 20 15:01:05.356241 master-0 kubenswrapper[28120]: I0220 15:01:05.356122 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName:
\"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:05.356241 master-0 kubenswrapper[28120]: I0220 15:01:05.356155 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-federate-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:05.356241 master-0 kubenswrapper[28120]: I0220 15:01:05.356195 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 15:01:05.356241 master-0 kubenswrapper[28120]: I0220 15:01:05.356238 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356263 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-trusted-ca-bundle\") pod 
\"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356289 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356347 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356373 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356397 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:05.356861 master-0 
kubenswrapper[28120]: I0220 15:01:05.356421 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/996d4949-f92c-42ac-9bda-8c6ec0295e92-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356472 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356497 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a39c5481-961c-4ac2-8c5b-a2c0165f4188-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356566 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/99fe3b99-0b40-4887-bcc8-59caa515b99f-metrics-client-ca\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356621 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config\") pod \"controller-manager-647657fcb-w9586\" (UID: 
\"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356694 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356722 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-certs\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356748 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/caef1c17-56b0-479c-b000-caaac3c2b249-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 15:01:05.356774 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-node-bootstrap-token\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 15:01:05.356861 master-0 kubenswrapper[28120]: I0220 
15:01:05.356810 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-server-tls\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:05.357839 master-0 kubenswrapper[28120]: I0220 15:01:05.356987 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae43311e-14ba-40a1-bdbf-f02d68031757-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 15:01:05.361010 master-0 kubenswrapper[28120]: I0220 15:01:05.360957 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 20 15:01:05.364711 master-0 kubenswrapper[28120]: I0220 15:01:05.364647 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 15:01:05.379158 master-0 kubenswrapper[28120]: I0220 15:01:05.379082 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 20 15:01:05.388338 master-0 kubenswrapper[28120]: I0220 15:01:05.388277 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-proxy-tls\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " 
pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 15:01:05.400512 master-0 kubenswrapper[28120]: I0220 15:01:05.400452 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-8m9cn" Feb 20 15:01:05.428493 master-0 kubenswrapper[28120]: I0220 15:01:05.426244 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 20 15:01:05.428493 master-0 kubenswrapper[28120]: I0220 15:01:05.426862 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/86f6836b-b018-4c7a-87ad-51809a4b9c7a-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 15:01:05.438887 master-0 kubenswrapper[28120]: I0220 15:01:05.438842 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 20 15:01:05.448550 master-0 kubenswrapper[28120]: I0220 15:01:05.448508 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-proxy-tls\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" Feb 20 15:01:05.462333 master-0 kubenswrapper[28120]: I0220 15:01:05.462264 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-tkxrl" Feb 20 15:01:05.482592 master-0 kubenswrapper[28120]: I0220 15:01:05.482519 28120 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-operators-dockercfg-d7z2t" Feb 20 15:01:05.483191 master-0 kubenswrapper[28120]: I0220 15:01:05.483121 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:05.500689 master-0 kubenswrapper[28120]: I0220 15:01:05.500624 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 20 15:01:05.519216 master-0 kubenswrapper[28120]: I0220 15:01:05.519070 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 20 15:01:05.526977 master-0 kubenswrapper[28120]: I0220 15:01:05.526905 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-images\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 15:01:05.539139 master-0 kubenswrapper[28120]: I0220 15:01:05.539068 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 20 15:01:05.547398 master-0 kubenswrapper[28120]: I0220 15:01:05.547326 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f6836b-b018-4c7a-87ad-51809a4b9c7a-config\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 15:01:05.560447 master-0 kubenswrapper[28120]: I0220 15:01:05.560069 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 20 15:01:05.579111 master-0 kubenswrapper[28120]: I0220 15:01:05.579038 28120 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 20 15:01:05.585393 master-0 kubenswrapper[28120]: I0220 15:01:05.585331 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-metrics-certs\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:01:05.598355 master-0 kubenswrapper[28120]: I0220 15:01:05.598289 28120 request.go:700] Waited for 1.987183684s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-gfr9m&limit=500&resourceVersion=0 Feb 20 15:01:05.600557 master-0 kubenswrapper[28120]: I0220 15:01:05.600487 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-gfr9m" Feb 20 15:01:05.620091 master-0 kubenswrapper[28120]: I0220 15:01:05.620002 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 20 15:01:05.625251 master-0 kubenswrapper[28120]: I0220 15:01:05.625181 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-default-certificate\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:01:05.639321 master-0 kubenswrapper[28120]: I0220 15:01:05.639248 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 20 15:01:05.648341 master-0 kubenswrapper[28120]: I0220 15:01:05.648284 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-stats-auth\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:01:05.660269 master-0 kubenswrapper[28120]: I0220 15:01:05.659906 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 20 15:01:05.669006 master-0 kubenswrapper[28120]: I0220 15:01:05.668721 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-service-ca-bundle\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:01:05.680167 master-0 kubenswrapper[28120]: I0220 15:01:05.680083 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 20 15:01:05.699724 master-0 kubenswrapper[28120]: I0220 15:01:05.699664 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 20 15:01:05.709303 master-0 kubenswrapper[28120]: I0220 15:01:05.709239 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-certs\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 15:01:05.720335 master-0 kubenswrapper[28120]: I0220 15:01:05.720246 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-rnbdm" Feb 20 15:01:05.740383 master-0 kubenswrapper[28120]: I0220 15:01:05.740305 28120 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 20 15:01:05.749201 master-0 kubenswrapper[28120]: I0220 15:01:05.749103 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ef3a09a5-b019-48a3-97f8-7ddadb37394e-node-bootstrap-token\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 15:01:05.760431 master-0 kubenswrapper[28120]: I0220 15:01:05.760367 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 20 15:01:05.768596 master-0 kubenswrapper[28120]: I0220 15:01:05.768534 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-config\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" Feb 20 15:01:05.779709 master-0 kubenswrapper[28120]: I0220 15:01:05.779585 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-ts5zc" Feb 20 15:01:05.799874 master-0 kubenswrapper[28120]: I0220 15:01:05.799798 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 20 15:01:05.809435 master-0 kubenswrapper[28120]: I0220 15:01:05.809358 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/996d4949-f92c-42ac-9bda-8c6ec0295e92-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" Feb 20 
15:01:05.820369 master-0 kubenswrapper[28120]: I0220 15:01:05.820297 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 20 15:01:05.840134 master-0 kubenswrapper[28120]: I0220 15:01:05.840052 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 20 15:01:05.859545 master-0 kubenswrapper[28120]: I0220 15:01:05.859473 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 20 15:01:05.869171 master-0 kubenswrapper[28120]: I0220 15:01:05.869111 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/996d4949-f92c-42ac-9bda-8c6ec0295e92-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: \"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" Feb 20 15:01:05.880574 master-0 kubenswrapper[28120]: I0220 15:01:05.880480 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 20 15:01:05.889105 master-0 kubenswrapper[28120]: I0220 15:01:05.889017 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/caef1c17-56b0-479c-b000-caaac3c2b249-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 15:01:05.900379 master-0 kubenswrapper[28120]: I0220 15:01:05.900290 28120 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 20 15:01:05.908467 master-0 kubenswrapper[28120]: I0220 15:01:05.908419 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 15:01:05.919410 master-0 kubenswrapper[28120]: I0220 15:01:05.919348 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-mnmfc" Feb 20 15:01:05.939156 master-0 kubenswrapper[28120]: I0220 15:01:05.939061 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 20 15:01:05.948970 master-0 kubenswrapper[28120]: I0220 15:01:05.948910 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/caef1c17-56b0-479c-b000-caaac3c2b249-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 15:01:05.962135 master-0 kubenswrapper[28120]: I0220 15:01:05.962072 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 20 15:01:05.980448 master-0 kubenswrapper[28120]: I0220 15:01:05.980371 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 20 15:01:06.000285 master-0 kubenswrapper[28120]: I0220 
15:01:06.000190 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 20 15:01:06.008335 master-0 kubenswrapper[28120]: I0220 15:01:06.008279 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-metrics-client-ca\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:06.008569 master-0 kubenswrapper[28120]: I0220 15:01:06.008355 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae43311e-14ba-40a1-bdbf-f02d68031757-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 15:01:06.008569 master-0 kubenswrapper[28120]: I0220 15:01:06.008431 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" Feb 20 15:01:06.008725 master-0 kubenswrapper[28120]: I0220 15:01:06.008690 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a39c5481-961c-4ac2-8c5b-a2c0165f4188-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" Feb 20 15:01:06.008866 master-0 kubenswrapper[28120]: I0220 15:01:06.008820 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/99fe3b99-0b40-4887-bcc8-59caa515b99f-metrics-client-ca\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 15:01:06.020652 master-0 kubenswrapper[28120]: I0220 15:01:06.020603 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 20 15:01:06.028707 master-0 kubenswrapper[28120]: I0220 15:01:06.028664 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 15:01:06.039786 master-0 kubenswrapper[28120]: I0220 15:01:06.039675 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-7m26s"
Feb 20 15:01:06.059233 master-0 kubenswrapper[28120]: I0220 15:01:06.059168 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-cpp79"
Feb 20 15:01:06.079371 master-0 kubenswrapper[28120]: I0220 15:01:06.079318 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 20 15:01:06.089173 master-0 kubenswrapper[28120]: I0220 15:01:06.089133 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 15:01:06.099204 master-0 kubenswrapper[28120]: I0220 15:01:06.099165 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 20 15:01:06.108333 master-0 kubenswrapper[28120]: I0220 15:01:06.108298 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a39c5481-961c-4ac2-8c5b-a2c0165f4188-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4"
Feb 20 15:01:06.119132 master-0 kubenswrapper[28120]: I0220 15:01:06.119083 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-jtt44"
Feb 20 15:01:06.139521 master-0 kubenswrapper[28120]: I0220 15:01:06.139447 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 20 15:01:06.148488 master-0 kubenswrapper[28120]: I0220 15:01:06.148416 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 15:01:06.159551 master-0 kubenswrapper[28120]: I0220 15:01:06.159483 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 20 15:01:06.168506 master-0 kubenswrapper[28120]: I0220 15:01:06.168453 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz"
Feb 20 15:01:06.180218 master-0 kubenswrapper[28120]: I0220 15:01:06.180168 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 20 15:01:06.188400 master-0 kubenswrapper[28120]: I0220 15:01:06.188340 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-tls\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 15:01:06.199264 master-0 kubenswrapper[28120]: I0220 15:01:06.199183 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 20 15:01:06.208834 master-0 kubenswrapper[28120]: I0220 15:01:06.208791 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/99fe3b99-0b40-4887-bcc8-59caa515b99f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp"
Feb 20 15:01:06.219661 master-0 kubenswrapper[28120]: I0220 15:01:06.219604 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-s2d9t"
Feb 20 15:01:06.240352 master-0 kubenswrapper[28120]: I0220 15:01:06.240258 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 20 15:01:06.248475 master-0 kubenswrapper[28120]: I0220 15:01:06.248406 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48"
Feb 20 15:01:06.259744 master-0 kubenswrapper[28120]: I0220 15:01:06.259683 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 20 15:01:06.268489 master-0 kubenswrapper[28120]: I0220 15:01:06.268394 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-metrics-server-audit-profiles\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:01:06.285753 master-0 kubenswrapper[28120]: I0220 15:01:06.285653 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 20 15:01:06.288427 master-0 kubenswrapper[28120]: I0220 15:01:06.288368 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae43311e-14ba-40a1-bdbf-f02d68031757-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48"
Feb 20 15:01:06.299609 master-0 kubenswrapper[28120]: I0220 15:01:06.299451 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 20 15:01:06.309901 master-0 kubenswrapper[28120]: I0220 15:01:06.309843 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-server-tls\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:01:06.319642 master-0 kubenswrapper[28120]: I0220 15:01:06.319302 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-dm2ds"
Feb 20 15:01:06.340041 master-0 kubenswrapper[28120]: I0220 15:01:06.339973 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 20 15:01:06.348251 master-0 kubenswrapper[28120]: I0220 15:01:06.348191 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-client-certs\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:01:06.357368 master-0 kubenswrapper[28120]: E0220 15:01:06.357296 28120 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.357527 master-0 kubenswrapper[28120]: E0220 15:01:06.357438 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config podName:63d49b12-8d51-4d97-9f06-ca4c5bf10dcd nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.35741034 +0000 UTC m=+5.618203933 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config") pod "route-controller-manager-584d5796b9-lf8t5" (UID: "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.357703 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.357746 28120 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.357712 28120 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7pkl9jqft06ca: failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.357829 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-serving-certs-ca-bundle podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.357788509 +0000 UTC m=+5.618582132 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-serving-certs-ca-bundle") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.357851 28120 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.357862 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert podName:63d49b12-8d51-4d97-9f06-ca4c5bf10dcd nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.357849561 +0000 UTC m=+5.618643164 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert") pod "route-controller-manager-584d5796b9-lf8t5" (UID: "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd") : failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.357782 28120 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.357913 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-cert podName:4e7cac87-2eaa-4dad-b2dc-c8ed0557c665 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.357881571 +0000 UTC m=+5.618675164 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-cert") pod "ingress-canary-5qlzq" (UID: "4e7cac87-2eaa-4dad-b2dc-c8ed0557c665") : failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.357982 28120 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.358051 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle podName:bdd203e0-3dd9-4e9d-81f1-46f60d235e38 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.357969024 +0000 UTC m=+5.618762627 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle") pod "metrics-server-9bcdd7684-kz2z2" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38") : failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.358073 28120 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.358084 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles podName:bdf18981-b755-4b11-8793-38bc5e2e755b nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.358073996 +0000 UTC m=+5.618867569 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles") pod "controller-manager-647657fcb-w9586" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.358207 master-0 kubenswrapper[28120]: E0220 15:01:06.358075 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.359302 master-0 kubenswrapper[28120]: E0220 15:01:06.358247 28120 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.359302 master-0 kubenswrapper[28120]: E0220 15:01:06.358343 28120 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.359302 master-0 kubenswrapper[28120]: E0220 15:01:06.358422 28120 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.359302 master-0 kubenswrapper[28120]: E0220 15:01:06.358359 28120 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.359302 master-0 kubenswrapper[28120]: E0220 15:01:06.358510 28120 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.359302 master-0 kubenswrapper[28120]: E0220 15:01:06.358510 28120 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.359302 master-0 kubenswrapper[28120]: E0220 15:01:06.358552 28120 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.359302 master-0 kubenswrapper[28120]: E0220 15:01:06.358342 28120 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.359302 master-0 kubenswrapper[28120]: E0220 15:01:06.359058 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client-kube-rbac-proxy-config podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.358127357 +0000 UTC m=+5.618920950 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.359302 master-0 kubenswrapper[28120]: E0220 15:01:06.359115 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle podName:bdd203e0-3dd9-4e9d-81f1-46f60d235e38 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.359097222 +0000 UTC m=+5.619890815 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle") pod "metrics-server-9bcdd7684-kz2z2" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.359302 master-0 kubenswrapper[28120]: E0220 15:01:06.359207 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca podName:bdf18981-b755-4b11-8793-38bc5e2e755b nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.359129872 +0000 UTC m=+5.619923475 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca") pod "controller-manager-647657fcb-w9586" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.359302 master-0 kubenswrapper[28120]: E0220 15:01:06.359275 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-trusted-ca-bundle podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.359231065 +0000 UTC m=+5.620024668 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-trusted-ca-bundle") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.360354 master-0 kubenswrapper[28120]: I0220 15:01:06.359470 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 20 15:01:06.360354 master-0 kubenswrapper[28120]: E0220 15:01:06.360033 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert podName:bdf18981-b755-4b11-8793-38bc5e2e755b nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.359303577 +0000 UTC m=+5.620097170 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert") pod "controller-manager-647657fcb-w9586" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b") : failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.360354 master-0 kubenswrapper[28120]: E0220 15:01:06.360088 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca podName:63d49b12-8d51-4d97-9f06-ca4c5bf10dcd nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.360077126 +0000 UTC m=+5.620870729 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca") pod "route-controller-manager-584d5796b9-lf8t5" (UID: "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.360354 master-0 kubenswrapper[28120]: E0220 15:01:06.360110 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-client-tls podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.360100167 +0000 UTC m=+5.620893770 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-client-tls") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.360354 master-0 kubenswrapper[28120]: E0220 15:01:06.360138 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config podName:bdf18981-b755-4b11-8793-38bc5e2e755b nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.360129707 +0000 UTC m=+5.620923310 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config") pod "controller-manager-647657fcb-w9586" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b") : failed to sync configmap cache: timed out waiting for the condition
Feb 20 15:01:06.360354 master-0 kubenswrapper[28120]: E0220 15:01:06.360159 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.360149318 +0000 UTC m=+5.620942921 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.360354 master-0 kubenswrapper[28120]: E0220 15:01:06.360188 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-federate-client-tls podName:8e8c5772-b6e2-43d8-b173-af74541855fb nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.360177799 +0000 UTC m=+5.620971402 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-federate-client-tls") pod "telemeter-client-64bcb8ffcf-vwfzx" (UID: "8e8c5772-b6e2-43d8-b173-af74541855fb") : failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.360354 master-0 kubenswrapper[28120]: E0220 15:01:06.360213 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49defec6-a225-47ab-99ff-7a846f23eb00-webhook-certs podName:49defec6-a225-47ab-99ff-7a846f23eb00 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:07.360202749 +0000 UTC m=+5.620996342 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/49defec6-a225-47ab-99ff-7a846f23eb00-webhook-certs") pod "multus-admission-controller-5f54bf67d4-7j5jb" (UID: "49defec6-a225-47ab-99ff-7a846f23eb00") : failed to sync secret cache: timed out waiting for the condition
Feb 20 15:01:06.379093 master-0 kubenswrapper[28120]: I0220 15:01:06.379038 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 20 15:01:06.399701 master-0 kubenswrapper[28120]: I0220 15:01:06.399611 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7pkl9jqft06ca"
Feb 20 15:01:06.420218 master-0 kubenswrapper[28120]: I0220 15:01:06.420147 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 20 15:01:06.440464 master-0 kubenswrapper[28120]: I0220 15:01:06.440192 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 20 15:01:06.459768 master-0 kubenswrapper[28120]: I0220 15:01:06.459675 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 20 15:01:06.480164 master-0 kubenswrapper[28120]: I0220 15:01:06.480100 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-c2dd6"
Feb 20 15:01:06.499169 master-0 kubenswrapper[28120]: I0220 15:01:06.499083 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-9zvh6"
Feb 20 15:01:06.529888 master-0 kubenswrapper[28120]: I0220 15:01:06.529815 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 20 15:01:06.539273 master-0 kubenswrapper[28120]: I0220 15:01:06.539222 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 20 15:01:06.560017 master-0 kubenswrapper[28120]: I0220 15:01:06.559864 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 20 15:01:06.579702 master-0 kubenswrapper[28120]: I0220 15:01:06.579654 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 20 15:01:06.599916 master-0 kubenswrapper[28120]: I0220 15:01:06.599843 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 20 15:01:06.617834 master-0 kubenswrapper[28120]: I0220 15:01:06.617747 28120 request.go:700] Waited for 2.990075542s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0
Feb 20 15:01:06.619561 master-0 kubenswrapper[28120]: I0220 15:01:06.619514 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 20 15:01:06.639580 master-0 kubenswrapper[28120]: I0220 15:01:06.639513 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 20 15:01:06.662347 master-0 kubenswrapper[28120]: I0220 15:01:06.662200 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-2w8rc"
Feb 20 15:01:06.679614 master-0 kubenswrapper[28120]: I0220 15:01:06.679543 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 20 15:01:06.699696 master-0 kubenswrapper[28120]: I0220 15:01:06.699402 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 20 15:01:06.720145 master-0 kubenswrapper[28120]: I0220 15:01:06.720059 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 20 15:01:06.760071 master-0 kubenswrapper[28120]: I0220 15:01:06.759828 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 20 15:01:06.779151 master-0 kubenswrapper[28120]: I0220 15:01:06.779087 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-qb2q7"
Feb 20 15:01:06.799324 master-0 kubenswrapper[28120]: I0220 15:01:06.799257 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Feb 20 15:01:06.819186 master-0 kubenswrapper[28120]: I0220 15:01:06.819069 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-7vdpw"
Feb 20 15:01:06.839556 master-0 kubenswrapper[28120]: I0220 15:01:06.839491 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Feb 20 15:01:06.859565 master-0 kubenswrapper[28120]: I0220 15:01:06.859477 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Feb 20 15:01:06.879208 master-0 kubenswrapper[28120]: I0220 15:01:06.879148 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Feb 20 15:01:06.898946 master-0 kubenswrapper[28120]: I0220 15:01:06.898863 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Feb 20 15:01:06.930298 master-0 kubenswrapper[28120]: I0220 15:01:06.930203 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Feb 20 15:01:06.966717 master-0 kubenswrapper[28120]: I0220 15:01:06.966614 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj796\" (UniqueName: \"kubernetes.io/projected/5f55b652-bef8-4f50-9d1d-9d0a340c1dea-kube-api-access-rj796\") pod \"router-default-7b65dc9fcb-tlsdt\" (UID: \"5f55b652-bef8-4f50-9d1d-9d0a340c1dea\") " pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt"
Feb 20 15:01:06.978382 master-0 kubenswrapper[28120]: I0220 15:01:06.978308 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthkk\" (UniqueName: \"kubernetes.io/projected/bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0-kube-api-access-tthkk\") pod \"dns-default-dzfl8\" (UID: \"bf7fe27e-1de0-4d90-9cd9-8625ac4e01d0\") " pod="openshift-dns/dns-default-dzfl8"
Feb 20 15:01:06.997583 master-0 kubenswrapper[28120]: I0220 15:01:06.997505 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jshgm\" (UniqueName: \"kubernetes.io/projected/27ab8945-6a5b-4f7d-b893-6358da214499-kube-api-access-jshgm\") pod \"cluster-storage-operator-f94476f49-m2bj7\" (UID: \"27ab8945-6a5b-4f7d-b893-6358da214499\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-m2bj7"
Feb 20 15:01:07.022704 master-0 kubenswrapper[28120]: I0220 15:01:07.022588 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mchbh\" (UniqueName: \"kubernetes.io/projected/a8c0a6d2-f1f9-49e3-9475-4983b50667bf-kube-api-access-mchbh\") pod \"apiserver-7659f6b598-z8454\" (UID: \"a8c0a6d2-f1f9-49e3-9475-4983b50667bf\") " pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454"
Feb 20 15:01:07.042459 master-0 kubenswrapper[28120]: I0220 15:01:07.042391 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-bound-sa-token\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw"
Feb 20 15:01:07.056975 master-0 kubenswrapper[28120]: I0220 15:01:07.056879 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smglm\" (UniqueName: \"kubernetes.io/projected/db9dc349-5216-43ff-8c17-3a9384a010ea-kube-api-access-smglm\") pod \"openshift-apiserver-operator-8586dccc9b-pwm24\" (UID: \"db9dc349-5216-43ff-8c17-3a9384a010ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-pwm24"
Feb 20 15:01:07.084354 master-0 kubenswrapper[28120]: I0220 15:01:07.084196 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk2lq\" (UniqueName: \"kubernetes.io/projected/0bedbe69-fc4b-4bd7-bcc2-acead927eda2-kube-api-access-gk2lq\") pod \"machine-api-operator-5c7cf458b4-gjdb4\" (UID: \"0bedbe69-fc4b-4bd7-bcc2-acead927eda2\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-gjdb4"
Feb 20 15:01:07.097567 master-0 kubenswrapper[28120]: I0220 15:01:07.097460 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5p2\" (UniqueName: \"kubernetes.io/projected/d28490b0-96ca-4fe0-8fae-e6f8390f933b-kube-api-access-qm5p2\") pod \"dns-operator-8c7d49845-gkrph\" (UID: \"d28490b0-96ca-4fe0-8fae-e6f8390f933b\") " pod="openshift-dns-operator/dns-operator-8c7d49845-gkrph"
Feb 20 15:01:07.123871 master-0 kubenswrapper[28120]: I0220 15:01:07.123757 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6nr\" (UniqueName: \"kubernetes.io/projected/21384bd0-495c-406a-9462-e9e740c04686-kube-api-access-gr6nr\") pod \"ovnkube-node-5gzs6\" (UID: \"21384bd0-495c-406a-9462-e9e740c04686\") " pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:07.134451 master-0 kubenswrapper[28120]: I0220 15:01:07.134383 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrrq4\" (UniqueName: \"kubernetes.io/projected/af7b6f34-adca-4bdb-9e41-e2995a1d67a8-kube-api-access-nrrq4\") pod \"migrator-5c85bff57-9mbsh\" (UID: \"af7b6f34-adca-4bdb-9e41-e2995a1d67a8\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-9mbsh"
Feb 20 15:01:07.156384 master-0 kubenswrapper[28120]: I0220 15:01:07.156300 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntlv2\" (UniqueName: \"kubernetes.io/projected/4ecbdf77-0c73-487e-943e-5315a0f8b8d4-kube-api-access-ntlv2\") pod \"packageserver-6c5ff764cd-l2884\" (UID: \"4ecbdf77-0c73-487e-943e-5315a0f8b8d4\") " pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884"
Feb 20 15:01:07.187760 master-0 kubenswrapper[28120]: I0220 15:01:07.187681 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/989af121-da08-4f40-b08c-dd2aa67bc60c-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-n29zt\" (UID: \"989af121-da08-4f40-b08c-dd2aa67bc60c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-n29zt"
Feb 20 15:01:07.210797 master-0 kubenswrapper[28120]: I0220 15:01:07.210719 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mpr8\" (UniqueName: \"kubernetes.io/projected/929dffba-46da-4d81-a437-bc6a9fe79811-kube-api-access-9mpr8\") pod \"network-check-target-ljvkb\" (UID: \"929dffba-46da-4d81-a437-bc6a9fe79811\") " pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 15:01:07.223965 master-0 kubenswrapper[28120]: I0220 15:01:07.223718 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fttgr\" (UniqueName: \"kubernetes.io/projected/419f28a9-8fd7-4b59-9554-4d884a1208b5-kube-api-access-fttgr\") pod \"cluster-monitoring-operator-6bb6d78bf-p7mjp\" (UID: \"419f28a9-8fd7-4b59-9554-4d884a1208b5\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-p7mjp"
Feb 20 15:01:07.249660 master-0 kubenswrapper[28120]: I0220 15:01:07.249572 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9c94\" (UniqueName: \"kubernetes.io/projected/87cf4690-1ec1-44fc-94bd-730d9f2e6762-kube-api-access-r9c94\") pod \"iptables-alerter-cgp8r\" (UID: \"87cf4690-1ec1-44fc-94bd-730d9f2e6762\") " pod="openshift-network-operator/iptables-alerter-cgp8r"
Feb 20 15:01:07.252679 master-0 kubenswrapper[28120]: I0220 15:01:07.252613 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb46b\" (UniqueName: \"kubernetes.io/projected/8b73ae08-0ad7-4f99-8002-6df0d984cd2c-kube-api-access-mb46b\") pod \"kube-storage-version-migrator-operator-fc889cfd5-hxgzq\" (UID: \"8b73ae08-0ad7-4f99-8002-6df0d984cd2c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-hxgzq"
Feb 20 15:01:07.270117 master-0 kubenswrapper[28120]: I0220 15:01:07.270055 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwb5n\" (UniqueName: \"kubernetes.io/projected/234a44fd-c153-47a6-a11d-7d4b7165c236-kube-api-access-gwb5n\") pod \"etcd-operator-545bf96f4d-jhd5c\" (UID: \"234a44fd-c153-47a6-a11d-7d4b7165c236\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-jhd5c"
Feb 20 15:01:07.301323 master-0 kubenswrapper[28120]: I0220 15:01:07.301246 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl7wm\" (UniqueName: \"kubernetes.io/projected/2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a-kube-api-access-tl7wm\") pod \"catalog-operator-596f79dd6f-2g7jd\" (UID: \"2edb5bfc-a0a7-4bc9-80f5-c14436f9af7a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd"
Feb 20 15:01:07.324251 master-0 kubenswrapper[28120]: I0220 15:01:07.324183 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b54xg\" (UniqueName: \"kubernetes.io/projected/c5429ce9-f3b7-4024-ac77-3a93a2ac77bb-kube-api-access-b54xg\") pod \"apiserver-776c8f54bc-gmvx8\" (UID: \"c5429ce9-f3b7-4024-ac77-3a93a2ac77bb\") " pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8"
Feb 20 15:01:07.343578 master-0 kubenswrapper[28120]: I0220 15:01:07.343427 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57cks\" (UniqueName: \"kubernetes.io/projected/31d71c90-cab7-4411-9426-0713cb026294-kube-api-access-57cks\") pod \"cluster-node-tuning-operator-bcf775fc9-rpvf4\" (UID: \"31d71c90-cab7-4411-9426-0713cb026294\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-rpvf4"
Feb 20 15:01:07.361565 master-0 kubenswrapper[28120]: I0220 15:01:07.361513 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43e9807a-859c-44c1-8511-0066b0f59ff8-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-pptg6\" (UID: \"43e9807a-859c-44c1-8511-0066b0f59ff8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-pptg6"
Feb 20 15:01:07.386252 master-0 kubenswrapper[28120]: I0220 15:01:07.386170 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jljjg\" (UniqueName: \"kubernetes.io/projected/49044786-483a-406e-8750-f6ded400841d-kube-api-access-jljjg\") pod \"control-plane-machine-set-operator-686847ff5f-2tpv8\" (UID: \"49044786-483a-406e-8750-f6ded400841d\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-2tpv8"
Feb 20 15:01:07.402414 master-0 kubenswrapper[28120]: I0220 15:01:07.402348 28120 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-n85mh\" (UniqueName: \"kubernetes.io/projected/900e244c-67aa-402f-b5f0-d37c5c1cedf7-kube-api-access-n85mh\") pod \"csi-snapshot-controller-operator-6fb4df594f-p29qr\" (UID: \"900e244c-67aa-402f-b5f0-d37c5c1cedf7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-p29qr" Feb 20 15:01:07.417865 master-0 kubenswrapper[28120]: I0220 15:01:07.417807 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-serving-certs-ca-bundle\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:07.418033 master-0 kubenswrapper[28120]: I0220 15:01:07.417993 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:07.418111 master-0 kubenswrapper[28120]: I0220 15:01:07.418084 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:07.418179 master-0 kubenswrapper[28120]: I0220 15:01:07.418144 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-serving-certs-ca-bundle\") pod 
\"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:07.418737 master-0 kubenswrapper[28120]: I0220 15:01:07.418695 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:07.419154 master-0 kubenswrapper[28120]: I0220 15:01:07.419111 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:07.419154 master-0 kubenswrapper[28120]: I0220 15:01:07.419123 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-cert\") pod \"ingress-canary-5qlzq\" (UID: \"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665\") " pod="openshift-ingress-canary/ingress-canary-5qlzq" Feb 20 15:01:07.419331 master-0 kubenswrapper[28120]: I0220 15:01:07.419145 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:07.419331 master-0 kubenswrapper[28120]: I0220 15:01:07.419290 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:07.419331 master-0 kubenswrapper[28120]: I0220 15:01:07.419310 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:07.419525 master-0 kubenswrapper[28120]: I0220 15:01:07.419386 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/49defec6-a225-47ab-99ff-7a846f23eb00-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-7j5jb\" (UID: \"49defec6-a225-47ab-99ff-7a846f23eb00\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb" Feb 20 15:01:07.419525 master-0 kubenswrapper[28120]: I0220 15:01:07.419426 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:07.419702 master-0 kubenswrapper[28120]: I0220 15:01:07.419654 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: 
\"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:07.419795 master-0 kubenswrapper[28120]: I0220 15:01:07.419692 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-cert\") pod \"ingress-canary-5qlzq\" (UID: \"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665\") " pod="openshift-ingress-canary/ingress-canary-5qlzq" Feb 20 15:01:07.419795 master-0 kubenswrapper[28120]: I0220 15:01:07.419781 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:07.419952 master-0 kubenswrapper[28120]: I0220 15:01:07.419858 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:07.420057 master-0 kubenswrapper[28120]: I0220 15:01:07.419918 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-federate-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:07.420149 master-0 kubenswrapper[28120]: I0220 15:01:07.420059 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:07.420149 master-0 kubenswrapper[28120]: I0220 15:01:07.420118 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:07.420287 master-0 kubenswrapper[28120]: I0220 15:01:07.420147 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/49defec6-a225-47ab-99ff-7a846f23eb00-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-7j5jb\" (UID: \"49defec6-a225-47ab-99ff-7a846f23eb00\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb" Feb 20 15:01:07.420287 master-0 kubenswrapper[28120]: I0220 15:01:07.420172 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:07.420422 master-0 kubenswrapper[28120]: I0220 15:01:07.420311 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " 
pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:07.420490 master-0 kubenswrapper[28120]: I0220 15:01:07.420425 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:07.420490 master-0 kubenswrapper[28120]: I0220 15:01:07.420451 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:07.420627 master-0 kubenswrapper[28120]: I0220 15:01:07.420506 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:07.420777 master-0 kubenswrapper[28120]: I0220 15:01:07.420732 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:07.421013 master-0 kubenswrapper[28120]: I0220 15:01:07.420970 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-trusted-ca-bundle\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:07.421157 master-0 kubenswrapper[28120]: I0220 15:01:07.421077 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-telemeter-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:07.421293 master-0 kubenswrapper[28120]: I0220 15:01:07.421236 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:07.421293 master-0 kubenswrapper[28120]: I0220 15:01:07.421275 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:07.421501 master-0 kubenswrapper[28120]: I0220 15:01:07.421304 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:07.421599 master-0 kubenswrapper[28120]: I0220 
15:01:07.421483 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-secret-telemeter-client\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:07.421599 master-0 kubenswrapper[28120]: I0220 15:01:07.421546 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/8e8c5772-b6e2-43d8-b173-af74541855fb-federate-client-tls\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:07.421781 master-0 kubenswrapper[28120]: I0220 15:01:07.421747 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:07.423657 master-0 kubenswrapper[28120]: I0220 15:01:07.423584 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv57n\" (UniqueName: \"kubernetes.io/projected/448aafd2-ffb3-42c5-8085-f6194d7862e5-kube-api-access-nv57n\") pod \"node-resolver-djs75\" (UID: \"448aafd2-ffb3-42c5-8085-f6194d7862e5\") " pod="openshift-dns/node-resolver-djs75" Feb 20 15:01:07.448328 master-0 kubenswrapper[28120]: I0220 15:01:07.448237 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcfnf\" (UniqueName: \"kubernetes.io/projected/d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de-kube-api-access-wcfnf\") pod \"machine-config-daemon-ztgdm\" (UID: \"d3feb3da-f4fa-4b30-a55c-a0ac9c28b5de\") " 
pod="openshift-machine-config-operator/machine-config-daemon-ztgdm" Feb 20 15:01:07.454281 master-0 kubenswrapper[28120]: I0220 15:01:07.454220 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svlzf\" (UniqueName: \"kubernetes.io/projected/9fd9f419-2cdc-4991-8fb9-87d76ac58976-kube-api-access-svlzf\") pod \"network-operator-7d7db75979-tj8fx\" (UID: \"9fd9f419-2cdc-4991-8fb9-87d76ac58976\") " pod="openshift-network-operator/network-operator-7d7db75979-tj8fx" Feb 20 15:01:07.477321 master-0 kubenswrapper[28120]: I0220 15:01:07.477232 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnwtd\" (UniqueName: \"kubernetes.io/projected/1fe69517-eec2-4721-933c-fa27cea7ab1f-kube-api-access-rnwtd\") pod \"package-server-manager-5c75f78c8b-2sw9z\" (UID: \"1fe69517-eec2-4721-933c-fa27cea7ab1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z" Feb 20 15:01:07.506802 master-0 kubenswrapper[28120]: I0220 15:01:07.506719 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rln42\" (UniqueName: \"kubernetes.io/projected/ac3680de-aabf-414b-a340-5e5e6aea4822-kube-api-access-rln42\") pod \"redhat-marketplace-n2cdp\" (UID: \"ac3680de-aabf-414b-a340-5e5e6aea4822\") " pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 15:01:07.523652 master-0 kubenswrapper[28120]: I0220 15:01:07.523575 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45226\" (UniqueName: \"kubernetes.io/projected/19ce4b45-db46-4fc3-8d72-963de22f026b-kube-api-access-45226\") pod \"tuned-jc4wl\" (UID: \"19ce4b45-db46-4fc3-8d72-963de22f026b\") " pod="openshift-cluster-node-tuning-operator/tuned-jc4wl" Feb 20 15:01:07.545245 master-0 kubenswrapper[28120]: I0220 15:01:07.545167 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vz22\" (UniqueName: 
\"kubernetes.io/projected/3bf5be04-e4dd-44d9-be1a-3abe6ddd2367-kube-api-access-2vz22\") pod \"machine-config-controller-54cb48566c-j9q5m\" (UID: \"3bf5be04-e4dd-44d9-be1a-3abe6ddd2367\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-j9q5m" Feb 20 15:01:07.563142 master-0 kubenswrapper[28120]: I0220 15:01:07.563056 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk2pl\" (UniqueName: \"kubernetes.io/projected/ee3a6748-0bbc-41bf-8726-a8db18faf03b-kube-api-access-mk2pl\") pod \"cluster-samples-operator-65c5c48b9b-92c4x\" (UID: \"ee3a6748-0bbc-41bf-8726-a8db18faf03b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-92c4x" Feb 20 15:01:07.589301 master-0 kubenswrapper[28120]: I0220 15:01:07.589195 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5fng\" (UniqueName: \"kubernetes.io/projected/84a61910-48eb-4c27-8d69-f6aa7ce912ca-kube-api-access-l5fng\") pod \"operator-controller-controller-manager-9cc7d7bb-6qqvd\" (UID: \"84a61910-48eb-4c27-8d69-f6aa7ce912ca\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd" Feb 20 15:01:07.603052 master-0 kubenswrapper[28120]: I0220 15:01:07.602865 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xd6r\" (UniqueName: \"kubernetes.io/projected/6e5d953b-dbc7-48df-9d6b-d61030ffd6e3-kube-api-access-2xd6r\") pod \"community-operators-x5fhb\" (UID: \"6e5d953b-dbc7-48df-9d6b-d61030ffd6e3\") " pod="openshift-marketplace/community-operators-x5fhb" Feb 20 15:01:07.623478 master-0 kubenswrapper[28120]: I0220 15:01:07.623413 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mclrj\" (UniqueName: \"kubernetes.io/projected/5d2b154b-de63-4c9b-99d8-487fb3035fb9-kube-api-access-mclrj\") pod \"ovnkube-control-plane-5d8dfcdc87-wrzfx\" (UID: \"5d2b154b-de63-4c9b-99d8-487fb3035fb9\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-wrzfx" Feb 20 15:01:07.637610 master-0 kubenswrapper[28120]: I0220 15:01:07.637521 28120 request.go:700] Waited for 3.954319628s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token Feb 20 15:01:07.648237 master-0 kubenswrapper[28120]: I0220 15:01:07.648169 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbw6n\" (UniqueName: \"kubernetes.io/projected/33675e96-ce49-49be-9117-954ac7cca5d5-kube-api-access-hbw6n\") pod \"network-node-identity-gprr4\" (UID: \"33675e96-ce49-49be-9117-954ac7cca5d5\") " pod="openshift-network-node-identity/network-node-identity-gprr4" Feb 20 15:01:07.655369 master-0 kubenswrapper[28120]: I0220 15:01:07.655284 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psd59\" (UniqueName: \"kubernetes.io/projected/b6e6d218-d969-40b5-a32b-9b2093089dbf-kube-api-access-psd59\") pod \"multus-additional-cni-plugins-6ts4p\" (UID: \"b6e6d218-d969-40b5-a32b-9b2093089dbf\") " pod="openshift-multus/multus-additional-cni-plugins-6ts4p" Feb 20 15:01:07.683574 master-0 kubenswrapper[28120]: I0220 15:01:07.683505 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtgrt\" (UniqueName: \"kubernetes.io/projected/b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d-kube-api-access-xtgrt\") pod \"certified-operators-9wddt\" (UID: \"b011cf4d-4822-4fc7-9f11-62f1f8c5cf4d\") " pod="openshift-marketplace/certified-operators-9wddt" Feb 20 15:01:07.701045 master-0 kubenswrapper[28120]: I0220 15:01:07.700969 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: 
\"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 15:01:07.721284 master-0 kubenswrapper[28120]: I0220 15:01:07.721174 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj4dx\" (UniqueName: \"kubernetes.io/projected/c81ad608-a8ad-4289-a8d2-d48acb9b540c-kube-api-access-wj4dx\") pod \"service-ca-operator-c48c8bf7c-pvlhj\" (UID: \"c81ad608-a8ad-4289-a8d2-d48acb9b540c\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-pvlhj" Feb 20 15:01:07.741593 master-0 kubenswrapper[28120]: I0220 15:01:07.741494 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k6br\" (UniqueName: \"kubernetes.io/projected/787a4fee-6625-4df5-a432-c7e1190da777-kube-api-access-9k6br\") pod \"service-ca-576b4d78bd-fc795\" (UID: \"787a4fee-6625-4df5-a432-c7e1190da777\") " pod="openshift-service-ca/service-ca-576b4d78bd-fc795" Feb 20 15:01:07.763856 master-0 kubenswrapper[28120]: I0220 15:01:07.763794 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvthk\" (UniqueName: \"kubernetes.io/projected/a4339bd5-b8d1-467e-8158-4464ea901148-kube-api-access-jvthk\") pod \"openshift-config-operator-6f47d587d6-hsqjc\" (UID: \"a4339bd5-b8d1-467e-8158-4464ea901148\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 15:01:07.782113 master-0 kubenswrapper[28120]: I0220 15:01:07.782031 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkq7j\" (UniqueName: \"kubernetes.io/projected/32a79fe0-e619-4a66-8617-e8111bdc7e96-kube-api-access-jkq7j\") pod \"multus-m6hpf\" (UID: \"32a79fe0-e619-4a66-8617-e8111bdc7e96\") " pod="openshift-multus/multus-m6hpf" Feb 20 15:01:07.801982 master-0 kubenswrapper[28120]: I0220 15:01:07.801866 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nlf9\" 
(UniqueName: \"kubernetes.io/projected/5ea4c132-b6d0-4dc9-942d-48e359eed418-kube-api-access-7nlf9\") pod \"network-metrics-daemon-99lkv\" (UID: \"5ea4c132-b6d0-4dc9-942d-48e359eed418\") " pod="openshift-multus/network-metrics-daemon-99lkv" Feb 20 15:01:07.830285 master-0 kubenswrapper[28120]: I0220 15:01:07.830166 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzmqr\" (UniqueName: \"kubernetes.io/projected/b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1-kube-api-access-pzmqr\") pod \"cluster-image-registry-operator-779979bdf7-g7glt\" (UID: \"b72c3f49-e00f-4ae5-a36f-d2f6ac8241e1\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-g7glt" Feb 20 15:01:07.844895 master-0 kubenswrapper[28120]: I0220 15:01:07.844773 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwclx\" (UniqueName: \"kubernetes.io/projected/b385880b-a26b-4353-8f6f-b7f926bcc67c-kube-api-access-fwclx\") pod \"cluster-autoscaler-operator-86b8dc6d6-c8w7r\" (UID: \"b385880b-a26b-4353-8f6f-b7f926bcc67c\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-c8w7r" Feb 20 15:01:07.861874 master-0 kubenswrapper[28120]: I0220 15:01:07.861727 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk5m4\" (UniqueName: \"kubernetes.io/projected/8157f73d-c757-40c4-80bc-3c9de2f2288a-kube-api-access-bk5m4\") pod \"authentication-operator-5bd7c86784-6r5qx\" (UID: \"8157f73d-c757-40c4-80bc-3c9de2f2288a\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-6r5qx" Feb 20 15:01:07.882404 master-0 kubenswrapper[28120]: I0220 15:01:07.882338 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svhtr\" (UniqueName: \"kubernetes.io/projected/45d7ef0c-272b-4d1e-965f-484975d5d25c-kube-api-access-svhtr\") pod \"openshift-controller-manager-operator-584cc7bcb5-j66jm\" (UID: \"45d7ef0c-272b-4d1e-965f-484975d5d25c\") 
" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-j66jm" Feb 20 15:01:07.904969 master-0 kubenswrapper[28120]: I0220 15:01:07.904879 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26473c28-db42-47e6-9164-8c441ccc48ca-kube-api-access\") pod \"cluster-version-operator-57476485-nl7tx\" (UID: \"26473c28-db42-47e6-9164-8c441ccc48ca\") " pod="openshift-cluster-version/cluster-version-operator-57476485-nl7tx" Feb 20 15:01:07.922137 master-0 kubenswrapper[28120]: I0220 15:01:07.922058 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jn8g\" (UniqueName: \"kubernetes.io/projected/d3ca2d2f-9f31-4524-a28f-cf16b02dd711-kube-api-access-4jn8g\") pod \"cluster-olm-operator-5bd7768f54-dv88s\" (UID: \"d3ca2d2f-9f31-4524-a28f-cf16b02dd711\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-dv88s" Feb 20 15:01:07.935044 master-0 kubenswrapper[28120]: I0220 15:01:07.934973 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9crd\" (UniqueName: \"kubernetes.io/projected/8a278abf-8c59-4454-94d0-a0d0768cbec5-kube-api-access-r9crd\") pod \"insights-operator-59b498fcfb-b9jmk\" (UID: \"8a278abf-8c59-4454-94d0-a0d0768cbec5\") " pod="openshift-insights/insights-operator-59b498fcfb-b9jmk" Feb 20 15:01:07.957846 master-0 kubenswrapper[28120]: I0220 15:01:07.957761 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxncg\" (UniqueName: \"kubernetes.io/projected/16d6dd52-d73b-4696-873e-00a6d4bb2c77-kube-api-access-sxncg\") pod \"machine-config-operator-7f8c75f984-fphk7\" (UID: \"16d6dd52-d73b-4696-873e-00a6d4bb2c77\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-fphk7" Feb 20 15:01:07.983882 master-0 kubenswrapper[28120]: I0220 15:01:07.983816 28120 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wcffg\" (UniqueName: \"kubernetes.io/projected/86f6836b-b018-4c7a-87ad-51809a4b9c7a-kube-api-access-wcffg\") pod \"cluster-baremetal-operator-d6bb9bb76-k2tnk\" (UID: \"86f6836b-b018-4c7a-87ad-51809a4b9c7a\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-k2tnk" Feb 20 15:01:08.012678 master-0 kubenswrapper[28120]: I0220 15:01:08.012612 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwnq7\" (UniqueName: \"kubernetes.io/projected/4b6a656c-40d6-4c63-9c6f-ac943eae4c9a-kube-api-access-mwnq7\") pod \"ingress-operator-6569778c84-fjtrw\" (UID: \"4b6a656c-40d6-4c63-9c6f-ac943eae4c9a\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-fjtrw" Feb 20 15:01:08.023393 master-0 kubenswrapper[28120]: I0220 15:01:08.023318 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c31b8a7-edcb-403d-9122-7eb740f7d659-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-lt7ww\" (UID: \"4c31b8a7-edcb-403d-9122-7eb740f7d659\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-lt7ww" Feb 20 15:01:08.045219 master-0 kubenswrapper[28120]: I0220 15:01:08.045161 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lcqg\" (UniqueName: \"kubernetes.io/projected/fc334fff-c0bf-4905-bcdb-b0d2a35b0590-kube-api-access-9lcqg\") pod \"catalogd-controller-manager-84b8d9d697-jl7zr\" (UID: \"fc334fff-c0bf-4905-bcdb-b0d2a35b0590\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:08.063233 master-0 kubenswrapper[28120]: I0220 15:01:08.063141 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lp29\" (UniqueName: \"kubernetes.io/projected/a1af84e0-776b-4285-906a-6880dbc82a7b-kube-api-access-6lp29\") pod 
\"csi-snapshot-controller-6847bb4785-2mtj6\" (UID: \"a1af84e0-776b-4285-906a-6880dbc82a7b\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-2mtj6" Feb 20 15:01:08.077128 master-0 kubenswrapper[28120]: I0220 15:01:08.077051 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm2jn\" (UniqueName: \"kubernetes.io/projected/93786626-fac4-48f0-bf72-992bc39f4a82-kube-api-access-fm2jn\") pod \"redhat-operators-z4wzg\" (UID: \"93786626-fac4-48f0-bf72-992bc39f4a82\") " pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 15:01:08.103045 master-0 kubenswrapper[28120]: I0220 15:01:08.102964 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4dn4\" (UniqueName: \"kubernetes.io/projected/92008ac4-8deb-4fb9-9116-14d2d005bd36-kube-api-access-n4dn4\") pod \"network-check-source-58fb6744f5-nth67\" (UID: \"92008ac4-8deb-4fb9-9116-14d2d005bd36\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-nth67" Feb 20 15:01:08.124641 master-0 kubenswrapper[28120]: I0220 15:01:08.124486 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d5fq\" (UniqueName: \"kubernetes.io/projected/c0a3548f-299c-4234-9bf1-c93efcb9740b-kube-api-access-7d5fq\") pod \"marketplace-operator-6f5488b997-97m7r\" (UID: \"c0a3548f-299c-4234-9bf1-c93efcb9740b\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" Feb 20 15:01:08.143278 master-0 kubenswrapper[28120]: I0220 15:01:08.143225 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47sqj\" (UniqueName: \"kubernetes.io/projected/64e9eca9-bbdd-4eca-9219-922bbab9b388-kube-api-access-47sqj\") pod \"olm-operator-5499d7f7bb-57rwb\" (UID: \"64e9eca9-bbdd-4eca-9219-922bbab9b388\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb" Feb 20 15:01:08.168122 master-0 kubenswrapper[28120]: I0220 15:01:08.168025 28120 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpt8j\" (UniqueName: \"kubernetes.io/projected/6949e9d5-460c-4b63-94cb-1b20ad75ee1c-kube-api-access-jpt8j\") pod \"cloud-credential-operator-6968c58f46-mv42p\" (UID: \"6949e9d5-460c-4b63-94cb-1b20ad75ee1c\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-mv42p" Feb 20 15:01:08.187108 master-0 kubenswrapper[28120]: I0220 15:01:08.185499 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkc7z\" (UniqueName: \"kubernetes.io/projected/99fe3b99-0b40-4887-bcc8-59caa515b99f-kube-api-access-dkc7z\") pod \"node-exporter-bk9bp\" (UID: \"99fe3b99-0b40-4887-bcc8-59caa515b99f\") " pod="openshift-monitoring/node-exporter-bk9bp" Feb 20 15:01:08.208053 master-0 kubenswrapper[28120]: I0220 15:01:08.207955 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z67rw\" (UniqueName: \"kubernetes.io/projected/8e8c5772-b6e2-43d8-b173-af74541855fb-kube-api-access-z67rw\") pod \"telemeter-client-64bcb8ffcf-vwfzx\" (UID: \"8e8c5772-b6e2-43d8-b173-af74541855fb\") " pod="openshift-monitoring/telemeter-client-64bcb8ffcf-vwfzx" Feb 20 15:01:08.228314 master-0 kubenswrapper[28120]: I0220 15:01:08.228212 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc9pl\" (UniqueName: \"kubernetes.io/projected/4e7cac87-2eaa-4dad-b2dc-c8ed0557c665-kube-api-access-lc9pl\") pod \"ingress-canary-5qlzq\" (UID: \"4e7cac87-2eaa-4dad-b2dc-c8ed0557c665\") " pod="openshift-ingress-canary/ingress-canary-5qlzq" Feb 20 15:01:08.240881 master-0 kubenswrapper[28120]: I0220 15:01:08.240770 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxjcq\" (UniqueName: \"kubernetes.io/projected/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-kube-api-access-wxjcq\") pod \"route-controller-manager-584d5796b9-lf8t5\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " 
pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:08.263762 master-0 kubenswrapper[28120]: I0220 15:01:08.263633 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zppr\" (UniqueName: \"kubernetes.io/projected/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-kube-api-access-9zppr\") pod \"metrics-server-9bcdd7684-kz2z2\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") " pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" Feb 20 15:01:08.276429 master-0 kubenswrapper[28120]: I0220 15:01:08.276361 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl7tw\" (UniqueName: \"kubernetes.io/projected/a39c5481-961c-4ac2-8c5b-a2c0165f4188-kube-api-access-tl7tw\") pod \"openshift-state-metrics-6dbff8cb4c-dcjr4\" (UID: \"a39c5481-961c-4ac2-8c5b-a2c0165f4188\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-dcjr4" Feb 20 15:01:08.298252 master-0 kubenswrapper[28120]: I0220 15:01:08.298163 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr5wk\" (UniqueName: \"kubernetes.io/projected/bdf18981-b755-4b11-8793-38bc5e2e755b-kube-api-access-wr5wk\") pod \"controller-manager-647657fcb-w9586\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:08.326301 master-0 kubenswrapper[28120]: I0220 15:01:08.326209 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcqd4\" (UniqueName: \"kubernetes.io/projected/ef3a09a5-b019-48a3-97f8-7ddadb37394e-kube-api-access-pcqd4\") pod \"machine-config-server-5frvf\" (UID: \"ef3a09a5-b019-48a3-97f8-7ddadb37394e\") " pod="openshift-machine-config-operator/machine-config-server-5frvf" Feb 20 15:01:08.341252 master-0 kubenswrapper[28120]: I0220 15:01:08.341195 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf5p9\" 
(UniqueName: \"kubernetes.io/projected/ae43311e-14ba-40a1-bdbf-f02d68031757-kube-api-access-mf5p9\") pod \"prometheus-operator-754bc4d665-gsn48\" (UID: \"ae43311e-14ba-40a1-bdbf-f02d68031757\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-gsn48" Feb 20 15:01:08.355598 master-0 kubenswrapper[28120]: I0220 15:01:08.355538 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhzk6\" (UniqueName: \"kubernetes.io/projected/c0b78aa6-7bc8-4221-81f5-bf62a7110380-kube-api-access-lhzk6\") pod \"kube-state-metrics-59584d565f-stlhz\" (UID: \"c0b78aa6-7bc8-4221-81f5-bf62a7110380\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-stlhz" Feb 20 15:01:08.388256 master-0 kubenswrapper[28120]: I0220 15:01:08.388094 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k94cb\" (UniqueName: \"kubernetes.io/projected/49defec6-a225-47ab-99ff-7a846f23eb00-kube-api-access-k94cb\") pod \"multus-admission-controller-5f54bf67d4-7j5jb\" (UID: \"49defec6-a225-47ab-99ff-7a846f23eb00\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-7j5jb" Feb 20 15:01:08.405842 master-0 kubenswrapper[28120]: I0220 15:01:08.405744 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kgzf\" (UniqueName: \"kubernetes.io/projected/caef1c17-56b0-479c-b000-caaac3c2b249-kube-api-access-8kgzf\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-855tj\" (UID: \"caef1c17-56b0-479c-b000-caaac3c2b249\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-855tj" Feb 20 15:01:08.423524 master-0 kubenswrapper[28120]: I0220 15:01:08.423458 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kfqn\" (UniqueName: \"kubernetes.io/projected/996d4949-f92c-42ac-9bda-8c6ec0295e92-kube-api-access-4kfqn\") pod \"machine-approver-7dd9c7d7b9-xcrlh\" (UID: 
\"996d4949-f92c-42ac-9bda-8c6ec0295e92\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-xcrlh" Feb 20 15:01:08.430962 master-0 kubenswrapper[28120]: E0220 15:01:08.430883 28120 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:08.430962 master-0 kubenswrapper[28120]: E0220 15:01:08.430950 28120 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:08.431182 master-0 kubenswrapper[28120]: E0220 15:01:08.431018 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access podName:fea431d7-394f-4639-abd6-c70a28921fc6 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:08.930998756 +0000 UTC m=+7.191792329 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:08.448424 master-0 kubenswrapper[28120]: E0220 15:01:08.448347 28120 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.39s" Feb 20 15:01:08.459008 master-0 kubenswrapper[28120]: I0220 15:01:08.458943 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 20 15:01:08.481774 master-0 kubenswrapper[28120]: I0220 15:01:08.481666 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 20 15:01:08.481774 master-0 kubenswrapper[28120]: I0220 15:01:08.481771 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:01:08.482128 master-0 kubenswrapper[28120]: I0220 15:01:08.481805 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 20 15:01:08.482128 master-0 kubenswrapper[28120]: I0220 15:01:08.481830 28120 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="8dae063b-c9d3-429d-96d3-35490fb40222" Feb 20 15:01:08.482128 master-0 kubenswrapper[28120]: I0220 15:01:08.481880 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"70c289149386153ec1fc6e6d4d67a823a97c082332c284d864c056158a4eb662"} Feb 20 15:01:08.482343 master-0 kubenswrapper[28120]: I0220 15:01:08.482198 28120 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:01:08.482343 master-0 kubenswrapper[28120]: I0220 15:01:08.482261 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:08.483340 master-0 kubenswrapper[28120]: I0220 15:01:08.483288 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 20 15:01:08.483451 master-0 kubenswrapper[28120]: I0220 15:01:08.483362 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:01:08.483451 master-0 kubenswrapper[28120]: I0220 15:01:08.483401 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 20 15:01:08.483451 master-0 kubenswrapper[28120]: I0220 15:01:08.483427 28120 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="8dae063b-c9d3-429d-96d3-35490fb40222" Feb 20 15:01:08.483679 master-0 kubenswrapper[28120]: I0220 15:01:08.483467 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:08.483679 master-0 kubenswrapper[28120]: I0220 15:01:08.483581 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:01:08.483679 master-0 kubenswrapper[28120]: I0220 15:01:08.483614 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:08.483679 master-0 kubenswrapper[28120]: I0220 15:01:08.483640 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:01:08.484010 master-0 
kubenswrapper[28120]: I0220 15:01:08.483708 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-dzfl8" Feb 20 15:01:08.484010 master-0 kubenswrapper[28120]: I0220 15:01:08.483737 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:08.484010 master-0 kubenswrapper[28120]: I0220 15:01:08.483792 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-dzfl8" Feb 20 15:01:08.484010 master-0 kubenswrapper[28120]: I0220 15:01:08.483847 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:08.484010 master-0 kubenswrapper[28120]: I0220 15:01:08.483982 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:08.484364 master-0 kubenswrapper[28120]: I0220 15:01:08.484093 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:01:08.484364 master-0 kubenswrapper[28120]: I0220 15:01:08.484183 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:08.484364 master-0 kubenswrapper[28120]: I0220 15:01:08.484218 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:08.484364 master-0 kubenswrapper[28120]: I0220 15:01:08.484279 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:08.484364 master-0 kubenswrapper[28120]: I0220 15:01:08.484337 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 
15:01:08.484703 master-0 kubenswrapper[28120]: I0220 15:01:08.484386 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-hsqjc" Feb 20 15:01:08.513088 master-0 kubenswrapper[28120]: I0220 15:01:08.512519 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:08.513088 master-0 kubenswrapper[28120]: I0220 15:01:08.512753 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:08.513088 master-0 kubenswrapper[28120]: I0220 15:01:08.512798 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:08.529037 master-0 kubenswrapper[28120]: I0220 15:01:08.528915 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:01:08.534405 master-0 kubenswrapper[28120]: I0220 15:01:08.534322 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:08.543211 master-0 kubenswrapper[28120]: I0220 15:01:08.543153 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-jl7zr" Feb 20 15:01:08.605034 master-0 kubenswrapper[28120]: I0220 15:01:08.604977 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 15:01:08.640478 master-0 kubenswrapper[28120]: I0220 15:01:08.640357 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:08.644985 master-0 kubenswrapper[28120]: I0220 15:01:08.644912 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" 
Feb 20 15:01:08.655057 master-0 kubenswrapper[28120]: I0220 15:01:08.654954 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n2cdp" Feb 20 15:01:08.665703 master-0 kubenswrapper[28120]: I0220 15:01:08.665479 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z4wzg" Feb 20 15:01:08.968512 master-0 kubenswrapper[28120]: I0220 15:01:08.968380 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:08.969076 master-0 kubenswrapper[28120]: E0220 15:01:08.968554 28120 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:08.969076 master-0 kubenswrapper[28120]: E0220 15:01:08.968580 28120 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:08.969076 master-0 kubenswrapper[28120]: E0220 15:01:08.968624 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access podName:fea431d7-394f-4639-abd6-c70a28921fc6 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:09.968606786 +0000 UTC m=+8.229400369 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:09.204052 master-0 kubenswrapper[28120]: I0220 15:01:09.203976 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" Feb 20 15:01:09.204245 master-0 kubenswrapper[28120]: I0220 15:01:09.204113 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:09.206818 master-0 kubenswrapper[28120]: I0220 15:01:09.206787 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-2g7jd" Feb 20 15:01:09.517215 master-0 kubenswrapper[28120]: I0220 15:01:09.517142 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:09.525207 master-0 kubenswrapper[28120]: I0220 15:01:09.524704 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:09.526256 master-0 kubenswrapper[28120]: I0220 15:01:09.526186 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:09.583666 master-0 kubenswrapper[28120]: I0220 15:01:09.583572 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x5fhb" Feb 20 15:01:09.985641 master-0 kubenswrapper[28120]: I0220 15:01:09.985482 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod 
\"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:09.986121 master-0 kubenswrapper[28120]: E0220 15:01:09.985771 28120 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:09.986121 master-0 kubenswrapper[28120]: E0220 15:01:09.985830 28120 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:09.986121 master-0 kubenswrapper[28120]: E0220 15:01:09.985954 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access podName:fea431d7-394f-4639-abd6-c70a28921fc6 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:11.985898472 +0000 UTC m=+10.246692065 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:10.122012 master-0 kubenswrapper[28120]: I0220 15:01:10.121865 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=12.121838148 podStartE2EDuration="12.121838148s" podCreationTimestamp="2026-02-20 15:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:01:10.120545075 +0000 UTC m=+8.381338678" watchObservedRunningTime="2026-02-20 15:01:10.121838148 +0000 UTC m=+8.382631751" Feb 20 15:01:10.146958 master-0 kubenswrapper[28120]: I0220 15:01:10.146843 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:01:10.191555 master-0 kubenswrapper[28120]: I0220 15:01:10.191452 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:01:10.191842 master-0 kubenswrapper[28120]: I0220 15:01:10.191575 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:10.194196 master-0 kubenswrapper[28120]: I0220 15:01:10.194135 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" Feb 20 15:01:10.246045 master-0 kubenswrapper[28120]: I0220 15:01:10.245110 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=7.244905973 podStartE2EDuration="7.244905973s" podCreationTimestamp="2026-02-20 
15:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:01:10.242491803 +0000 UTC m=+8.503285396" watchObservedRunningTime="2026-02-20 15:01:10.244905973 +0000 UTC m=+8.505699576" Feb 20 15:01:10.515334 master-0 kubenswrapper[28120]: I0220 15:01:10.515216 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 15:01:10.515520 master-0 kubenswrapper[28120]: I0220 15:01:10.515354 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:10.520560 master-0 kubenswrapper[28120]: I0220 15:01:10.520274 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-6c5ff764cd-l2884" Feb 20 15:01:10.532142 master-0 kubenswrapper[28120]: I0220 15:01:10.532114 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:11.145794 master-0 kubenswrapper[28120]: I0220 15:01:11.145743 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:01:11.150119 master-0 kubenswrapper[28120]: I0220 15:01:11.150062 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:01:11.167646 master-0 kubenswrapper[28120]: I0220 15:01:11.167601 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp" Feb 20 15:01:11.167751 master-0 kubenswrapper[28120]: I0220 15:01:11.167730 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:11.171264 master-0 kubenswrapper[28120]: I0220 15:01:11.171216 28120 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-d8fvp" Feb 20 15:01:11.553867 master-0 kubenswrapper[28120]: I0220 15:01:11.553809 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:01:11.991317 master-0 kubenswrapper[28120]: I0220 15:01:11.991243 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9wddt" Feb 20 15:01:12.016726 master-0 kubenswrapper[28120]: I0220 15:01:12.016651 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:12.016985 master-0 kubenswrapper[28120]: E0220 15:01:12.016865 28120 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:12.016985 master-0 kubenswrapper[28120]: E0220 15:01:12.016909 28120 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:12.017146 master-0 kubenswrapper[28120]: E0220 15:01:12.017070 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access podName:fea431d7-394f-4639-abd6-c70a28921fc6 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:16.017037321 +0000 UTC m=+14.277830924 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:12.192948 master-0 kubenswrapper[28120]: I0220 15:01:12.192181 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7659f6b598-z8454" Feb 20 15:01:12.477202 master-0 kubenswrapper[28120]: I0220 15:01:12.477101 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-776c8f54bc-gmvx8" Feb 20 15:01:12.788171 master-0 kubenswrapper[28120]: I0220 15:01:12.788086 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:01:13.079501 master-0 kubenswrapper[28120]: I0220 15:01:13.079347 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:13.079738 master-0 kubenswrapper[28120]: I0220 15:01:13.079549 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:13.079738 master-0 kubenswrapper[28120]: I0220 15:01:13.079564 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:13.125013 master-0 kubenswrapper[28120]: I0220 15:01:13.124952 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6" Feb 20 15:01:13.556945 master-0 kubenswrapper[28120]: I0220 15:01:13.555450 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:01:13.780330 master-0 kubenswrapper[28120]: I0220 15:01:13.780233 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:01:13.787527 master-0 kubenswrapper[28120]: I0220 15:01:13.787438 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:01:13.809478 master-0 kubenswrapper[28120]: I0220 15:01:13.809285 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 15:01:13.811237 master-0 kubenswrapper[28120]: I0220 15:01:13.809495 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 15:01:13.819722 master-0 kubenswrapper[28120]: I0220 15:01:13.819591 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 15:01:14.190124 master-0 kubenswrapper[28120]: I0220 15:01:14.190023 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Feb 20 15:01:14.207848 master-0 kubenswrapper[28120]: I0220 15:01:14.207805 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 20 15:01:14.222136 master-0 kubenswrapper[28120]: I0220 15:01:14.222088 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 15:01:14.222368 master-0 kubenswrapper[28120]: I0220 15:01:14.222231 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 15:01:14.228690 master-0 kubenswrapper[28120]: I0220 15:01:14.228630 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-57rwb"
Feb 20 15:01:14.568726 master-0 kubenswrapper[28120]: I0220 15:01:14.568669 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:01:15.392951 master-0 kubenswrapper[28120]: I0220 15:01:15.392820 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 15:01:15.393130 master-0 kubenswrapper[28120]: I0220 15:01:15.393098 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 15:01:15.394838 master-0 kubenswrapper[28120]: I0220 15:01:15.394806 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-ljvkb"
Feb 20 15:01:15.491994 master-0 kubenswrapper[28120]: I0220 15:01:15.491591 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:01:16.075728 master-0 kubenswrapper[28120]: I0220 15:01:16.075658 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Feb 20 15:01:16.076331 master-0 kubenswrapper[28120]: E0220 15:01:16.075866 28120 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 15:01:16.076331 master-0 kubenswrapper[28120]: E0220 15:01:16.075904 28120 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 15:01:16.076331 master-0 kubenswrapper[28120]: E0220 15:01:16.075988 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access podName:fea431d7-394f-4639-abd6-c70a28921fc6 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:24.075965473 +0000 UTC m=+22.336759046 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 15:01:16.354960 master-0 kubenswrapper[28120]: I0220 15:01:16.354808 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd"
Feb 20 15:01:16.355152 master-0 kubenswrapper[28120]: I0220 15:01:16.355091 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 15:01:16.357521 master-0 kubenswrapper[28120]: I0220 15:01:16.357475 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-6qqvd"
Feb 20 15:01:16.638008 master-0 kubenswrapper[28120]: I0220 15:01:16.637804 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:16.638252 master-0 kubenswrapper[28120]: I0220 15:01:16.638118 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 15:01:16.672777 master-0 kubenswrapper[28120]: I0220 15:01:16.671650 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5gzs6"
Feb 20 15:01:16.685432 master-0 kubenswrapper[28120]: I0220 15:01:16.685376 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 15:01:16.744405 master-0 kubenswrapper[28120]: I0220 15:01:16.744353 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 15:01:17.240093 master-0 kubenswrapper[28120]: I0220 15:01:17.240006 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 15:01:17.240900 master-0 kubenswrapper[28120]: I0220 15:01:17.240204 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 15:01:17.242611 master-0 kubenswrapper[28120]: I0220 15:01:17.242406 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-2sw9z"
Feb 20 15:01:17.450001 master-0 kubenswrapper[28120]: I0220 15:01:17.448909 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x5fhb"
Feb 20 15:01:17.494999 master-0 kubenswrapper[28120]: I0220 15:01:17.492004 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x5fhb"
Feb 20 15:01:17.621609 master-0 kubenswrapper[28120]: I0220 15:01:17.621556 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x5fhb"
Feb 20 15:01:17.629498 master-0 kubenswrapper[28120]: I0220 15:01:17.629460 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9wddt"
Feb 20 15:01:17.645113 master-0 kubenswrapper[28120]: I0220 15:01:17.645063 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:01:17.653597 master-0 kubenswrapper[28120]: I0220 15:01:17.653519 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:01:17.668781 master-0 kubenswrapper[28120]: I0220 15:01:17.664878 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z4wzg"
Feb 20 15:01:17.707963 master-0 kubenswrapper[28120]: I0220 15:01:17.701122 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n2cdp"
Feb 20 15:01:17.707963 master-0 kubenswrapper[28120]: I0220 15:01:17.701230 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 15:01:17.714357 master-0 kubenswrapper[28120]: I0220 15:01:17.714311 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z4wzg"
Feb 20 15:01:17.747679 master-0 kubenswrapper[28120]: I0220 15:01:17.743959 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n2cdp"
Feb 20 15:01:18.292955 master-0 kubenswrapper[28120]: I0220 15:01:18.292881 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 15:01:18.293524 master-0 kubenswrapper[28120]: I0220 15:01:18.293118 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 20 15:01:18.295347 master-0 kubenswrapper[28120]: I0220 15:01:18.295304 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 15:01:18.597057 master-0 kubenswrapper[28120]: I0220 15:01:18.596812 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:01:18.675988 master-0 kubenswrapper[28120]: I0220 15:01:18.673907 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z4wzg"
Feb 20 15:01:22.807081 master-0 kubenswrapper[28120]: I0220 15:01:22.807010 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:01:24.131598 master-0 kubenswrapper[28120]: I0220 15:01:24.131491 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Feb 20 15:01:24.132413 master-0 kubenswrapper[28120]: E0220 15:01:24.131758 28120 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 15:01:24.132413 master-0 kubenswrapper[28120]: E0220 15:01:24.131827 28120 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 15:01:24.132413 master-0 kubenswrapper[28120]: E0220 15:01:24.131992 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access podName:fea431d7-394f-4639-abd6-c70a28921fc6 nodeName:}" failed. No retries permitted until 2026-02-20 15:01:40.131912696 +0000 UTC m=+38.392706299 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 20 15:01:26.079627 master-0 kubenswrapper[28120]: I0220 15:01:26.079572 28120 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 20 15:01:26.080115 master-0 kubenswrapper[28120]: I0220 15:01:26.079819 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor" containerID="cri-o://648933d86ebc41b4f0c29dee7c6def360e8626c8f16e72ee5fb4e3e4b02a93f1" gracePeriod=5
Feb 20 15:01:31.684129 master-0 kubenswrapper[28120]: I0220 15:01:31.684087 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5c4f5d60772fa42f26e9c219bffa62b9/startup-monitor/0.log"
Feb 20 15:01:31.684808 master-0 kubenswrapper[28120]: I0220 15:01:31.684777 28120 generic.go:334] "Generic (PLEG): container finished" podID="5c4f5d60772fa42f26e9c219bffa62b9" containerID="648933d86ebc41b4f0c29dee7c6def360e8626c8f16e72ee5fb4e3e4b02a93f1" exitCode=137
Feb 20 15:01:31.684946 master-0 kubenswrapper[28120]: I0220 15:01:31.684912 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29c1db2527f092355034b5557942ea50b25282b9b77501d427c1a6d0e01d2771"
Feb 20 15:01:31.693597 master-0 kubenswrapper[28120]: I0220 15:01:31.693554 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5c4f5d60772fa42f26e9c219bffa62b9/startup-monitor/0.log"
Feb 20 15:01:31.693693 master-0 kubenswrapper[28120]: I0220 15:01:31.693635 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:01:31.739353 master-0 kubenswrapper[28120]: I0220 15:01:31.739280 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 20 15:01:31.739581 master-0 kubenswrapper[28120]: I0220 15:01:31.739366 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 20 15:01:31.739581 master-0 kubenswrapper[28120]: I0220 15:01:31.739439 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 20 15:01:31.739581 master-0 kubenswrapper[28120]: I0220 15:01:31.739561 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 20 15:01:31.739725 master-0 kubenswrapper[28120]: I0220 15:01:31.739554 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:01:31.739849 master-0 kubenswrapper[28120]: I0220 15:01:31.739675 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests" (OuterVolumeSpecName: "manifests") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:01:31.739950 master-0 kubenswrapper[28120]: I0220 15:01:31.739684 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:01:31.739950 master-0 kubenswrapper[28120]: I0220 15:01:31.739683 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log" (OuterVolumeSpecName: "var-log") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:01:31.739950 master-0 kubenswrapper[28120]: I0220 15:01:31.739616 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 20 15:01:31.740720 master-0 kubenswrapper[28120]: I0220 15:01:31.740681 28120 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") on node \"master-0\" DevicePath \"\""
Feb 20 15:01:31.740843 master-0 kubenswrapper[28120]: I0220 15:01:31.740732 28120 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 15:01:31.740843 master-0 kubenswrapper[28120]: I0220 15:01:31.740760 28120 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:01:31.740843 master-0 kubenswrapper[28120]: I0220 15:01:31.740788 28120 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") on node \"master-0\" DevicePath \"\""
Feb 20 15:01:31.747836 master-0 kubenswrapper[28120]: I0220 15:01:31.747784 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:01:31.842557 master-0 kubenswrapper[28120]: I0220 15:01:31.842474 28120 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:01:32.070059 master-0 kubenswrapper[28120]: I0220 15:01:32.069988 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4f5d60772fa42f26e9c219bffa62b9" path="/var/lib/kubelet/pods/5c4f5d60772fa42f26e9c219bffa62b9/volumes"
Feb 20 15:01:32.070594 master-0 kubenswrapper[28120]: I0220 15:01:32.070554 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 20 15:01:32.094231 master-0 kubenswrapper[28120]: I0220 15:01:32.094168 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 20 15:01:32.094231 master-0 kubenswrapper[28120]: I0220 15:01:32.094213 28120 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="791aca8a-307f-45db-8a5e-24044606e652"
Feb 20 15:01:32.096945 master-0 kubenswrapper[28120]: I0220 15:01:32.096855 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 20 15:01:32.096945 master-0 kubenswrapper[28120]: I0220 15:01:32.096941 28120 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="791aca8a-307f-45db-8a5e-24044606e652"
Feb 20 15:01:32.692397 master-0 kubenswrapper[28120]: I0220 15:01:32.692303 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:01:35.826210 master-0 kubenswrapper[28120]: I0220 15:01:35.826119 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.826526 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.826547 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.826576 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef51d3b-cd8b-4f34-961e-8daebbed3ca6" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.826589 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef51d3b-cd8b-4f34-961e-8daebbed3ca6" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.826622 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975d0fde-cb2f-4599-b3b7-7de876307a61" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.826635 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="975d0fde-cb2f-4599-b3b7-7de876307a61" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.826660 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014f3913-ac7e-431a-880c-91d979a5dfc7" containerName="assisted-installer-controller"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.826672 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="014f3913-ac7e-431a-880c-91d979a5dfc7" containerName="assisted-installer-controller"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.826699 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.826711 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.826734 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.826745 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.826768 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277ab008-e6f0-49cd-801d-54d3071036d4" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.826780 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="277ab008-e6f0-49cd-801d-54d3071036d4" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.826810 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.826823 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.826844 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986049a1-b3e4-4dca-b178-55eaa7a27bfb" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.826856 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="986049a1-b3e4-4dca-b178-55eaa7a27bfb" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.826886 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53835140-8eed-401c-ac07-f89b554ff616" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.826898 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="53835140-8eed-401c-ac07-f89b554ff616" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.826917 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6285323-3e75-4d44-ad05-98890c097dd2" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.826954 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6285323-3e75-4d44-ad05-98890c097dd2" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.826972 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c60ad1f-f8d9-4c67-97a3-f9fa491bd463" containerName="collect-profiles"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.826984 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c60ad1f-f8d9-4c67-97a3-f9fa491bd463" containerName="collect-profiles"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: E0220 15:01:35.827010 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380174fb-b30c-4f45-9119-397cdca91756" containerName="installer"
Feb 20 15:01:35.827005 master-0 kubenswrapper[28120]: I0220 15:01:35.827022 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="380174fb-b30c-4f45-9119-397cdca91756" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: E0220 15:01:35.827043 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3c370c-58b4-4115-a359-b3f55c87284d" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827055 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3c370c-58b4-4115-a359-b3f55c87284d" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: E0220 15:01:35.827075 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea431d7-394f-4639-abd6-c70a28921fc6" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827087 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea431d7-394f-4639-abd6-c70a28921fc6" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: E0220 15:01:35.827114 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8741d7-c96b-41cc-80cb-81683bb68480" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827126 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8741d7-c96b-41cc-80cb-81683bb68480" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827317 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827375 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="380174fb-b30c-4f45-9119-397cdca91756" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827421 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef51d3b-cd8b-4f34-961e-8daebbed3ca6" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827454 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827492 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="975d0fde-cb2f-4599-b3b7-7de876307a61" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827539 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8741d7-c96b-41cc-80cb-81683bb68480" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827568 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="277ab008-e6f0-49cd-801d-54d3071036d4" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827596 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="fea431d7-394f-4639-abd6-c70a28921fc6" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827620 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827643 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827667 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3c370c-58b4-4115-a359-b3f55c87284d" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827692 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c60ad1f-f8d9-4c67-97a3-f9fa491bd463" containerName="collect-profiles"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827713 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="53835140-8eed-401c-ac07-f89b554ff616" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827734 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="986049a1-b3e4-4dca-b178-55eaa7a27bfb" containerName="installer"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827765 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="014f3913-ac7e-431a-880c-91d979a5dfc7" containerName="assisted-installer-controller"
Feb 20 15:01:35.828506 master-0 kubenswrapper[28120]: I0220 15:01:35.827799 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6285323-3e75-4d44-ad05-98890c097dd2" containerName="installer"
Feb 20 15:01:35.829786 master-0 kubenswrapper[28120]: I0220 15:01:35.828770 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 15:01:35.831269 master-0 kubenswrapper[28120]: I0220 15:01:35.831223 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 20 15:01:35.831423 master-0 kubenswrapper[28120]: I0220 15:01:35.831308 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-j2nvb"
Feb 20 15:01:35.843826 master-0 kubenswrapper[28120]: I0220 15:01:35.843753 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 20 15:01:35.910946 master-0 kubenswrapper[28120]: I0220 15:01:35.906527 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d093ed7-8264-4834-ac33-5fa08f4b27c2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 15:01:35.910946 master-0 kubenswrapper[28120]: I0220 15:01:35.906649 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d093ed7-8264-4834-ac33-5fa08f4b27c2-var-lock\") pod \"installer-4-master-0\" (UID: \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 15:01:35.910946 master-0 kubenswrapper[28120]: I0220 15:01:35.906745 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d093ed7-8264-4834-ac33-5fa08f4b27c2-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 15:01:36.007487 master-0 kubenswrapper[28120]: I0220 15:01:36.007428 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d093ed7-8264-4834-ac33-5fa08f4b27c2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 15:01:36.007726 master-0 kubenswrapper[28120]: I0220 15:01:36.007586 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d093ed7-8264-4834-ac33-5fa08f4b27c2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 15:01:36.007726 master-0 kubenswrapper[28120]: I0220 15:01:36.007639 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d093ed7-8264-4834-ac33-5fa08f4b27c2-var-lock\") pod \"installer-4-master-0\" (UID: \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 15:01:36.007726 master-0 kubenswrapper[28120]: I0220 15:01:36.007682 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d093ed7-8264-4834-ac33-5fa08f4b27c2-var-lock\") pod \"installer-4-master-0\" (UID: \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 15:01:36.007881 master-0 kubenswrapper[28120]: I0220 15:01:36.007843 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d093ed7-8264-4834-ac33-5fa08f4b27c2-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 15:01:36.022336 master-0 kubenswrapper[28120]: I0220 15:01:36.022283 28120 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 20 15:01:36.028079 master-0 kubenswrapper[28120]: I0220 15:01:36.024693 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d093ed7-8264-4834-ac33-5fa08f4b27c2-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 15:01:36.166675 master-0 kubenswrapper[28120]: I0220 15:01:36.166520 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 15:01:36.669055 master-0 kubenswrapper[28120]: I0220 15:01:36.669009 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Feb 20 15:01:36.673409 master-0 kubenswrapper[28120]: W0220 15:01:36.673344 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5d093ed7_8264_4834_ac33_5fa08f4b27c2.slice/crio-cccd26426182e81137b9535965baa43a046299ee365b6481daa843ad12599e9e WatchSource:0}: Error finding container cccd26426182e81137b9535965baa43a046299ee365b6481daa843ad12599e9e: Status 404 returned error can't find the container with id cccd26426182e81137b9535965baa43a046299ee365b6481daa843ad12599e9e
Feb 20 15:01:36.730327 master-0 kubenswrapper[28120]: I0220 15:01:36.730179 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"5d093ed7-8264-4834-ac33-5fa08f4b27c2","Type":"ContainerStarted","Data":"cccd26426182e81137b9535965baa43a046299ee365b6481daa843ad12599e9e"}
Feb 20 15:01:36.736670 master-0 kubenswrapper[28120]: I0220 15:01:36.736606 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Feb 20 15:01:36.739303 master-0 kubenswrapper[28120]: I0220 15:01:36.739183 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Feb 20 15:01:36.741218 master-0 kubenswrapper[28120]: I0220 15:01:36.741169 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-fjspz"
Feb 20 15:01:36.741482 master-0 kubenswrapper[28120]: I0220 15:01:36.741454 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Feb 20 15:01:36.758965 master-0 kubenswrapper[28120]: I0220 15:01:36.754211 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Feb 20 15:01:36.823410 master-0 kubenswrapper[28120]: I0220 15:01:36.823348 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70fc5e85-d7d2-45f7-89ff-889a248d6103-kube-api-access\") pod \"installer-5-master-0\" (UID: \"70fc5e85-d7d2-45f7-89ff-889a248d6103\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 20 15:01:36.823621 master-0 kubenswrapper[28120]: I0220 15:01:36.823574 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70fc5e85-d7d2-45f7-89ff-889a248d6103-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"70fc5e85-d7d2-45f7-89ff-889a248d6103\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 20 15:01:36.823879 master-0 kubenswrapper[28120]: I0220 15:01:36.823831 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70fc5e85-d7d2-45f7-89ff-889a248d6103-var-lock\") pod \"installer-5-master-0\" (UID: \"70fc5e85-d7d2-45f7-89ff-889a248d6103\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 20 15:01:36.925913 master-0 kubenswrapper[28120]: I0220 15:01:36.925762 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70fc5e85-d7d2-45f7-89ff-889a248d6103-var-lock\") pod \"installer-5-master-0\" (UID: \"70fc5e85-d7d2-45f7-89ff-889a248d6103\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 20 15:01:36.925913 master-0 kubenswrapper[28120]: I0220 15:01:36.925887 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70fc5e85-d7d2-45f7-89ff-889a248d6103-var-lock\") pod \"installer-5-master-0\" (UID: \"70fc5e85-d7d2-45f7-89ff-889a248d6103\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 20 15:01:36.936832 master-0 kubenswrapper[28120]: I0220 15:01:36.925906 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70fc5e85-d7d2-45f7-89ff-889a248d6103-kube-api-access\") pod \"installer-5-master-0\" (UID: \"70fc5e85-d7d2-45f7-89ff-889a248d6103\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 20 15:01:36.936832 master-0 kubenswrapper[28120]: I0220 15:01:36.926144 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70fc5e85-d7d2-45f7-89ff-889a248d6103-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"70fc5e85-d7d2-45f7-89ff-889a248d6103\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 20 15:01:36.936832 master-0 kubenswrapper[28120]: I0220 15:01:36.926308 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70fc5e85-d7d2-45f7-89ff-889a248d6103-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"70fc5e85-d7d2-45f7-89ff-889a248d6103\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 20 15:01:36.946643 master-0 kubenswrapper[28120]: I0220 15:01:36.946594 28120 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70fc5e85-d7d2-45f7-89ff-889a248d6103-kube-api-access\") pod \"installer-5-master-0\" (UID: \"70fc5e85-d7d2-45f7-89ff-889a248d6103\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 20 15:01:37.073647 master-0 kubenswrapper[28120]: I0220 15:01:37.073612 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 20 15:01:37.598533 master-0 kubenswrapper[28120]: I0220 15:01:37.598470 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 20 15:01:37.607278 master-0 kubenswrapper[28120]: W0220 15:01:37.607225 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod70fc5e85_d7d2_45f7_89ff_889a248d6103.slice/crio-05835867e5ee070c821b046db82a7d3ad2696b555f519aadc50a86a08db40896 WatchSource:0}: Error finding container 05835867e5ee070c821b046db82a7d3ad2696b555f519aadc50a86a08db40896: Status 404 returned error can't find the container with id 05835867e5ee070c821b046db82a7d3ad2696b555f519aadc50a86a08db40896 Feb 20 15:01:37.751273 master-0 kubenswrapper[28120]: I0220 15:01:37.751205 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"70fc5e85-d7d2-45f7-89ff-889a248d6103","Type":"ContainerStarted","Data":"05835867e5ee070c821b046db82a7d3ad2696b555f519aadc50a86a08db40896"} Feb 20 15:01:37.756491 master-0 kubenswrapper[28120]: I0220 15:01:37.755600 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"5d093ed7-8264-4834-ac33-5fa08f4b27c2","Type":"ContainerStarted","Data":"b34d21b695cd7db4b5e00374b0cee29abc38c42fe0f55b67b7403761107934f5"} Feb 20 15:01:37.778963 master-0 kubenswrapper[28120]: I0220 15:01:37.775043 28120 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.775028197 podStartE2EDuration="2.775028197s" podCreationTimestamp="2026-02-20 15:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:01:37.77355599 +0000 UTC m=+36.034349573" watchObservedRunningTime="2026-02-20 15:01:37.775028197 +0000 UTC m=+36.035821750" Feb 20 15:01:38.763523 master-0 kubenswrapper[28120]: I0220 15:01:38.763468 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"70fc5e85-d7d2-45f7-89ff-889a248d6103","Type":"ContainerStarted","Data":"cbe7e13cae8823ba9986d71a265d2c28a0e68eafe45c89b8009a07e5eb6852e2"} Feb 20 15:01:38.784043 master-0 kubenswrapper[28120]: I0220 15:01:38.783974 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=2.783950254 podStartE2EDuration="2.783950254s" podCreationTimestamp="2026-02-20 15:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:01:38.78218054 +0000 UTC m=+37.042974103" watchObservedRunningTime="2026-02-20 15:01:38.783950254 +0000 UTC m=+37.044743827" Feb 20 15:01:40.174591 master-0 kubenswrapper[28120]: I0220 15:01:40.174508 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Feb 20 15:01:40.175437 master-0 kubenswrapper[28120]: E0220 15:01:40.174663 28120 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object 
"openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:40.175437 master-0 kubenswrapper[28120]: E0220 15:01:40.174684 28120 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:40.175437 master-0 kubenswrapper[28120]: E0220 15:01:40.174726 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access podName:fea431d7-394f-4639-abd6-c70a28921fc6 nodeName:}" failed. No retries permitted until 2026-02-20 15:02:12.174710563 +0000 UTC m=+70.435504126 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 20 15:01:51.621302 master-0 kubenswrapper[28120]: I0220 15:01:51.621158 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-647657fcb-w9586"] Feb 20 15:01:51.622005 master-0 kubenswrapper[28120]: I0220 15:01:51.621516 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" podUID="bdf18981-b755-4b11-8793-38bc5e2e755b" containerName="controller-manager" containerID="cri-o://db8b2b97e53f2e0f9eb8b077984d360867eb853438c79f964c4316743bc03b9a" gracePeriod=30 Feb 20 15:01:51.648983 master-0 kubenswrapper[28120]: I0220 15:01:51.648909 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"] Feb 20 15:01:51.649545 master-0 kubenswrapper[28120]: I0220 15:01:51.649220 28120 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" podUID="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" containerName="route-controller-manager" containerID="cri-o://00f9e9b6b6ccf56cbc32cbe6a3bf7dcabdcf2702c8bfb772dfa8c5e881fe2a66" gracePeriod=30 Feb 20 15:01:51.870626 master-0 kubenswrapper[28120]: I0220 15:01:51.870561 28120 generic.go:334] "Generic (PLEG): container finished" podID="bdf18981-b755-4b11-8793-38bc5e2e755b" containerID="db8b2b97e53f2e0f9eb8b077984d360867eb853438c79f964c4316743bc03b9a" exitCode=0 Feb 20 15:01:51.870809 master-0 kubenswrapper[28120]: I0220 15:01:51.870633 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" event={"ID":"bdf18981-b755-4b11-8793-38bc5e2e755b","Type":"ContainerDied","Data":"db8b2b97e53f2e0f9eb8b077984d360867eb853438c79f964c4316743bc03b9a"} Feb 20 15:01:51.870809 master-0 kubenswrapper[28120]: I0220 15:01:51.870669 28120 scope.go:117] "RemoveContainer" containerID="71a3faa6e2a13b4bcadc91647966380b556ee1824a73e0209af007ec80d749b3" Feb 20 15:01:51.873052 master-0 kubenswrapper[28120]: I0220 15:01:51.873024 28120 generic.go:334] "Generic (PLEG): container finished" podID="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" containerID="00f9e9b6b6ccf56cbc32cbe6a3bf7dcabdcf2702c8bfb772dfa8c5e881fe2a66" exitCode=0 Feb 20 15:01:51.873052 master-0 kubenswrapper[28120]: I0220 15:01:51.873051 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" event={"ID":"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd","Type":"ContainerDied","Data":"00f9e9b6b6ccf56cbc32cbe6a3bf7dcabdcf2702c8bfb772dfa8c5e881fe2a66"} Feb 20 15:01:51.919102 master-0 kubenswrapper[28120]: I0220 15:01:51.919051 28120 scope.go:117] "RemoveContainer" containerID="ce6cf48b03cf7ea4bb59cbc88338b3797dd3cd5289e6bbf78ef6ac04abd04f98" Feb 20 15:01:52.319697 master-0 kubenswrapper[28120]: 
I0220 15:01:52.319629 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:52.478955 master-0 kubenswrapper[28120]: I0220 15:01:52.478477 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxjcq\" (UniqueName: \"kubernetes.io/projected/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-kube-api-access-wxjcq\") pod \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " Feb 20 15:01:52.478955 master-0 kubenswrapper[28120]: I0220 15:01:52.478589 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config\") pod \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " Feb 20 15:01:52.478955 master-0 kubenswrapper[28120]: I0220 15:01:52.478624 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca\") pod \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " Feb 20 15:01:52.478955 master-0 kubenswrapper[28120]: I0220 15:01:52.478651 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert\") pod \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\" (UID: \"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd\") " Feb 20 15:01:52.482969 master-0 kubenswrapper[28120]: I0220 15:01:52.482428 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca" (OuterVolumeSpecName: "client-ca") pod "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" (UID: "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:01:52.483143 master-0 kubenswrapper[28120]: I0220 15:01:52.483067 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config" (OuterVolumeSpecName: "config") pod "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" (UID: "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:01:52.485341 master-0 kubenswrapper[28120]: I0220 15:01:52.484315 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" (UID: "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:01:52.486654 master-0 kubenswrapper[28120]: I0220 15:01:52.486614 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-kube-api-access-wxjcq" (OuterVolumeSpecName: "kube-api-access-wxjcq") pod "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" (UID: "63d49b12-8d51-4d97-9f06-ca4c5bf10dcd"). InnerVolumeSpecName "kube-api-access-wxjcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:01:52.518154 master-0 kubenswrapper[28120]: I0220 15:01:52.518114 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:52.581669 master-0 kubenswrapper[28120]: I0220 15:01:52.580347 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxjcq\" (UniqueName: \"kubernetes.io/projected/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-kube-api-access-wxjcq\") on node \"master-0\" DevicePath \"\"" Feb 20 15:01:52.581669 master-0 kubenswrapper[28120]: I0220 15:01:52.580429 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:01:52.581669 master-0 kubenswrapper[28120]: I0220 15:01:52.580451 28120 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 15:01:52.581669 master-0 kubenswrapper[28120]: I0220 15:01:52.580472 28120 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 15:01:52.681154 master-0 kubenswrapper[28120]: I0220 15:01:52.681096 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr5wk\" (UniqueName: \"kubernetes.io/projected/bdf18981-b755-4b11-8793-38bc5e2e755b-kube-api-access-wr5wk\") pod \"bdf18981-b755-4b11-8793-38bc5e2e755b\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " Feb 20 15:01:52.681713 master-0 kubenswrapper[28120]: I0220 15:01:52.681262 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles\") pod \"bdf18981-b755-4b11-8793-38bc5e2e755b\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " Feb 20 
15:01:52.681713 master-0 kubenswrapper[28120]: I0220 15:01:52.681305 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config\") pod \"bdf18981-b755-4b11-8793-38bc5e2e755b\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " Feb 20 15:01:52.681713 master-0 kubenswrapper[28120]: I0220 15:01:52.681360 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca\") pod \"bdf18981-b755-4b11-8793-38bc5e2e755b\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " Feb 20 15:01:52.681713 master-0 kubenswrapper[28120]: I0220 15:01:52.681441 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert\") pod \"bdf18981-b755-4b11-8793-38bc5e2e755b\" (UID: \"bdf18981-b755-4b11-8793-38bc5e2e755b\") " Feb 20 15:01:52.682264 master-0 kubenswrapper[28120]: I0220 15:01:52.682190 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca" (OuterVolumeSpecName: "client-ca") pod "bdf18981-b755-4b11-8793-38bc5e2e755b" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:01:52.682488 master-0 kubenswrapper[28120]: I0220 15:01:52.682412 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bdf18981-b755-4b11-8793-38bc5e2e755b" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:01:52.682488 master-0 kubenswrapper[28120]: I0220 15:01:52.682460 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config" (OuterVolumeSpecName: "config") pod "bdf18981-b755-4b11-8793-38bc5e2e755b" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:01:52.685587 master-0 kubenswrapper[28120]: I0220 15:01:52.685540 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf18981-b755-4b11-8793-38bc5e2e755b-kube-api-access-wr5wk" (OuterVolumeSpecName: "kube-api-access-wr5wk") pod "bdf18981-b755-4b11-8793-38bc5e2e755b" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b"). InnerVolumeSpecName "kube-api-access-wr5wk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:01:52.686465 master-0 kubenswrapper[28120]: I0220 15:01:52.686422 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bdf18981-b755-4b11-8793-38bc5e2e755b" (UID: "bdf18981-b755-4b11-8793-38bc5e2e755b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:01:52.783095 master-0 kubenswrapper[28120]: I0220 15:01:52.783030 28120 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf18981-b755-4b11-8793-38bc5e2e755b-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 15:01:52.783095 master-0 kubenswrapper[28120]: I0220 15:01:52.783091 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr5wk\" (UniqueName: \"kubernetes.io/projected/bdf18981-b755-4b11-8793-38bc5e2e755b-kube-api-access-wr5wk\") on node \"master-0\" DevicePath \"\"" Feb 20 15:01:52.783312 master-0 kubenswrapper[28120]: I0220 15:01:52.783118 28120 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 20 15:01:52.783312 master-0 kubenswrapper[28120]: I0220 15:01:52.783146 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:01:52.783312 master-0 kubenswrapper[28120]: I0220 15:01:52.783174 28120 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdf18981-b755-4b11-8793-38bc5e2e755b-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 15:01:52.885691 master-0 kubenswrapper[28120]: I0220 15:01:52.885517 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" event={"ID":"bdf18981-b755-4b11-8793-38bc5e2e755b","Type":"ContainerDied","Data":"88c6fd1112c1b3efe31f79a2dc6cd9198555dc6b1c7c6547da60005b56efbb9b"} Feb 20 15:01:52.885691 master-0 kubenswrapper[28120]: I0220 15:01:52.885542 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-647657fcb-w9586" Feb 20 15:01:52.886036 master-0 kubenswrapper[28120]: I0220 15:01:52.885608 28120 scope.go:117] "RemoveContainer" containerID="db8b2b97e53f2e0f9eb8b077984d360867eb853438c79f964c4316743bc03b9a" Feb 20 15:01:52.894153 master-0 kubenswrapper[28120]: I0220 15:01:52.891905 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" event={"ID":"63d49b12-8d51-4d97-9f06-ca4c5bf10dcd","Type":"ContainerDied","Data":"a3b80d783578c7d5bcce0396d10b0b7507567b7ddeed1d7dec131680bd38e6da"} Feb 20 15:01:52.894153 master-0 kubenswrapper[28120]: I0220 15:01:52.892025 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5" Feb 20 15:01:52.917463 master-0 kubenswrapper[28120]: I0220 15:01:52.917413 28120 scope.go:117] "RemoveContainer" containerID="00f9e9b6b6ccf56cbc32cbe6a3bf7dcabdcf2702c8bfb772dfa8c5e881fe2a66" Feb 20 15:01:52.937663 master-0 kubenswrapper[28120]: I0220 15:01:52.936477 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-647657fcb-w9586"] Feb 20 15:01:52.940015 master-0 kubenswrapper[28120]: I0220 15:01:52.939891 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-647657fcb-w9586"] Feb 20 15:01:52.960342 master-0 kubenswrapper[28120]: I0220 15:01:52.959046 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"] Feb 20 15:01:52.960745 master-0 kubenswrapper[28120]: I0220 15:01:52.960728 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-584d5796b9-lf8t5"] Feb 20 15:01:54.068973 master-0 kubenswrapper[28120]: I0220 
15:01:54.068852 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" path="/var/lib/kubelet/pods/63d49b12-8d51-4d97-9f06-ca4c5bf10dcd/volumes" Feb 20 15:01:54.070206 master-0 kubenswrapper[28120]: I0220 15:01:54.070147 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf18981-b755-4b11-8793-38bc5e2e755b" path="/var/lib/kubelet/pods/bdf18981-b755-4b11-8793-38bc5e2e755b/volumes" Feb 20 15:01:58.407824 master-0 kubenswrapper[28120]: I0220 15:01:58.407743 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 20 15:01:58.408803 master-0 kubenswrapper[28120]: E0220 15:01:58.408151 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" containerName="route-controller-manager" Feb 20 15:01:58.408803 master-0 kubenswrapper[28120]: I0220 15:01:58.408180 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" containerName="route-controller-manager" Feb 20 15:01:58.408803 master-0 kubenswrapper[28120]: E0220 15:01:58.408226 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf18981-b755-4b11-8793-38bc5e2e755b" containerName="controller-manager" Feb 20 15:01:58.408803 master-0 kubenswrapper[28120]: I0220 15:01:58.408240 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf18981-b755-4b11-8793-38bc5e2e755b" containerName="controller-manager" Feb 20 15:01:58.408803 master-0 kubenswrapper[28120]: E0220 15:01:58.408254 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" containerName="route-controller-manager" Feb 20 15:01:58.408803 master-0 kubenswrapper[28120]: I0220 15:01:58.408268 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" containerName="route-controller-manager" Feb 20 15:01:58.408803 master-0 
kubenswrapper[28120]: E0220 15:01:58.408306 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf18981-b755-4b11-8793-38bc5e2e755b" containerName="controller-manager" Feb 20 15:01:58.408803 master-0 kubenswrapper[28120]: I0220 15:01:58.408319 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf18981-b755-4b11-8793-38bc5e2e755b" containerName="controller-manager" Feb 20 15:01:58.408803 master-0 kubenswrapper[28120]: I0220 15:01:58.408528 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf18981-b755-4b11-8793-38bc5e2e755b" containerName="controller-manager" Feb 20 15:01:58.408803 master-0 kubenswrapper[28120]: I0220 15:01:58.408572 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" containerName="route-controller-manager" Feb 20 15:01:58.408803 master-0 kubenswrapper[28120]: I0220 15:01:58.408590 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf18981-b755-4b11-8793-38bc5e2e755b" containerName="controller-manager" Feb 20 15:01:58.408803 master-0 kubenswrapper[28120]: I0220 15:01:58.408647 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d49b12-8d51-4d97-9f06-ca4c5bf10dcd" containerName="route-controller-manager" Feb 20 15:01:58.409534 master-0 kubenswrapper[28120]: I0220 15:01:58.409300 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:01:58.415806 master-0 kubenswrapper[28120]: I0220 15:01:58.415743 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-w4mx6" Feb 20 15:01:58.416416 master-0 kubenswrapper[28120]: I0220 15:01:58.416368 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 20 15:01:58.424459 master-0 kubenswrapper[28120]: I0220 15:01:58.424384 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 20 15:01:58.580213 master-0 kubenswrapper[28120]: I0220 15:01:58.580135 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1bf2da21-9f35-4d95-aabf-7a9c8264474f-var-lock\") pod \"installer-4-master-0\" (UID: \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:01:58.580523 master-0 kubenswrapper[28120]: I0220 15:01:58.580294 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bf2da21-9f35-4d95-aabf-7a9c8264474f-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:01:58.580608 master-0 kubenswrapper[28120]: I0220 15:01:58.580504 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bf2da21-9f35-4d95-aabf-7a9c8264474f-kube-api-access\") pod \"installer-4-master-0\" (UID: \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:01:58.682523 master-0 kubenswrapper[28120]: I0220 15:01:58.682246 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1bf2da21-9f35-4d95-aabf-7a9c8264474f-var-lock\") pod \"installer-4-master-0\" (UID: \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:01:58.682523 master-0 kubenswrapper[28120]: I0220 15:01:58.682392 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1bf2da21-9f35-4d95-aabf-7a9c8264474f-var-lock\") pod \"installer-4-master-0\" (UID: \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:01:58.682523 master-0 kubenswrapper[28120]: I0220 15:01:58.682470 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bf2da21-9f35-4d95-aabf-7a9c8264474f-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:01:58.682523 master-0 kubenswrapper[28120]: I0220 15:01:58.682406 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bf2da21-9f35-4d95-aabf-7a9c8264474f-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:01:58.683045 master-0 kubenswrapper[28120]: I0220 15:01:58.682675 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bf2da21-9f35-4d95-aabf-7a9c8264474f-kube-api-access\") pod \"installer-4-master-0\" (UID: \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:01:58.716706 master-0 kubenswrapper[28120]: I0220 15:01:58.713527 28120 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bf2da21-9f35-4d95-aabf-7a9c8264474f-kube-api-access\") pod \"installer-4-master-0\" (UID: \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:01:58.744396 master-0 kubenswrapper[28120]: I0220 15:01:58.744207 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:01:59.281101 master-0 kubenswrapper[28120]: I0220 15:01:59.281033 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 20 15:01:59.288202 master-0 kubenswrapper[28120]: W0220 15:01:59.288122 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1bf2da21_9f35_4d95_aabf_7a9c8264474f.slice/crio-273eb8665f38754c3ce35c1f750d3b1d7eb49cd90191a9bd12e88453101394cb WatchSource:0}: Error finding container 273eb8665f38754c3ce35c1f750d3b1d7eb49cd90191a9bd12e88453101394cb: Status 404 returned error can't find the container with id 273eb8665f38754c3ce35c1f750d3b1d7eb49cd90191a9bd12e88453101394cb Feb 20 15:01:59.965632 master-0 kubenswrapper[28120]: I0220 15:01:59.965438 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"1bf2da21-9f35-4d95-aabf-7a9c8264474f","Type":"ContainerStarted","Data":"a40f295459b4a2af08fc1680c69148647ff567e1baab8a7d9c6564aa36182635"} Feb 20 15:01:59.965632 master-0 kubenswrapper[28120]: I0220 15:01:59.965530 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"1bf2da21-9f35-4d95-aabf-7a9c8264474f","Type":"ContainerStarted","Data":"273eb8665f38754c3ce35c1f750d3b1d7eb49cd90191a9bd12e88453101394cb"} Feb 20 15:02:00.000102 master-0 kubenswrapper[28120]: I0220 15:01:59.998802 28120 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=1.998778977 podStartE2EDuration="1.998778977s" podCreationTimestamp="2026-02-20 15:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:01:59.988487831 +0000 UTC m=+58.249281404" watchObservedRunningTime="2026-02-20 15:01:59.998778977 +0000 UTC m=+58.259572550" Feb 20 15:02:02.044431 master-0 kubenswrapper[28120]: I0220 15:02:02.040416 28120 scope.go:117] "RemoveContainer" containerID="3c2b6c4d3887c6ce78fb1f319d3d917dd19b6ede5e9ab3d53c00d05b6ea4ef23" Feb 20 15:02:02.080204 master-0 kubenswrapper[28120]: I0220 15:02:02.079274 28120 scope.go:117] "RemoveContainer" containerID="321be2d7453c33396b3363bf789e4d552d4e8d66090aa9915bf60f644a971c6e" Feb 20 15:02:02.105200 master-0 kubenswrapper[28120]: I0220 15:02:02.105138 28120 scope.go:117] "RemoveContainer" containerID="92784546c39ab249199b64e99295b360ac694daa7345bcc5ca4290c1679248d5" Feb 20 15:02:09.155515 master-0 kubenswrapper[28120]: I0220 15:02:09.155442 28120 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 20 15:02:09.156425 master-0 kubenswrapper[28120]: I0220 15:02:09.155911 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-cert-syncer" containerID="cri-o://87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d" gracePeriod=30 Feb 20 15:02:09.156425 master-0 kubenswrapper[28120]: I0220 15:02:09.155980 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" 
containerID="cri-o://2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181" gracePeriod=30 Feb 20 15:02:09.156425 master-0 kubenswrapper[28120]: I0220 15:02:09.156046 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-recovery-controller" containerID="cri-o://dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c" gracePeriod=30 Feb 20 15:02:09.158434 master-0 kubenswrapper[28120]: I0220 15:02:09.158366 28120 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 20 15:02:09.158919 master-0 kubenswrapper[28120]: E0220 15:02:09.158858 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="wait-for-host-port" Feb 20 15:02:09.158919 master-0 kubenswrapper[28120]: I0220 15:02:09.158909 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="wait-for-host-port" Feb 20 15:02:09.159127 master-0 kubenswrapper[28120]: E0220 15:02:09.158996 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" Feb 20 15:02:09.159127 master-0 kubenswrapper[28120]: I0220 15:02:09.159018 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" Feb 20 15:02:09.159127 master-0 kubenswrapper[28120]: E0220 15:02:09.159070 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-recovery-controller" Feb 20 15:02:09.159127 master-0 kubenswrapper[28120]: I0220 15:02:09.159092 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" 
containerName="kube-scheduler-recovery-controller" Feb 20 15:02:09.159127 master-0 kubenswrapper[28120]: E0220 15:02:09.159129 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" Feb 20 15:02:09.159410 master-0 kubenswrapper[28120]: I0220 15:02:09.159146 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" Feb 20 15:02:09.159410 master-0 kubenswrapper[28120]: E0220 15:02:09.159172 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-cert-syncer" Feb 20 15:02:09.159410 master-0 kubenswrapper[28120]: I0220 15:02:09.159189 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-cert-syncer" Feb 20 15:02:09.159592 master-0 kubenswrapper[28120]: I0220 15:02:09.159459 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-cert-syncer" Feb 20 15:02:09.159592 master-0 kubenswrapper[28120]: I0220 15:02:09.159500 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler-recovery-controller" Feb 20 15:02:09.159592 master-0 kubenswrapper[28120]: I0220 15:02:09.159541 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="wait-for-host-port" Feb 20 15:02:09.159592 master-0 kubenswrapper[28120]: I0220 15:02:09.159568 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" containerName="kube-scheduler" Feb 20 15:02:09.159825 master-0 kubenswrapper[28120]: I0220 15:02:09.159611 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" 
containerName="kube-scheduler" Feb 20 15:02:09.188331 master-0 kubenswrapper[28120]: E0220 15:02:09.188253 28120 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ff46cdb00d28519af7c0cdc9ea8d11.slice/crio-conmon-87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d.scope\": RecentStats: unable to find data in memory cache]" Feb 20 15:02:09.256206 master-0 kubenswrapper[28120]: I0220 15:02:09.256113 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:09.256459 master-0 kubenswrapper[28120]: I0220 15:02:09.256243 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:09.336132 master-0 kubenswrapper[28120]: I0220 15:02:09.336082 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler-cert-syncer/0.log" Feb 20 15:02:09.337035 master-0 kubenswrapper[28120]: I0220 15:02:09.336974 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler/0.log" Feb 20 15:02:09.337789 master-0 kubenswrapper[28120]: I0220 15:02:09.337757 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:09.342129 master-0 kubenswrapper[28120]: I0220 15:02:09.342027 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="56ff46cdb00d28519af7c0cdc9ea8d11" podUID="d03a1e6620a92c780b0a91c72a55bc8b" Feb 20 15:02:09.357530 master-0 kubenswrapper[28120]: I0220 15:02:09.357469 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") pod \"56ff46cdb00d28519af7c0cdc9ea8d11\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " Feb 20 15:02:09.357672 master-0 kubenswrapper[28120]: I0220 15:02:09.357567 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "56ff46cdb00d28519af7c0cdc9ea8d11" (UID: "56ff46cdb00d28519af7c0cdc9ea8d11"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:02:09.357672 master-0 kubenswrapper[28120]: I0220 15:02:09.357573 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") pod \"56ff46cdb00d28519af7c0cdc9ea8d11\" (UID: \"56ff46cdb00d28519af7c0cdc9ea8d11\") " Feb 20 15:02:09.357672 master-0 kubenswrapper[28120]: I0220 15:02:09.357640 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "56ff46cdb00d28519af7c0cdc9ea8d11" (UID: "56ff46cdb00d28519af7c0cdc9ea8d11"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:02:09.357874 master-0 kubenswrapper[28120]: I0220 15:02:09.357717 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:09.357874 master-0 kubenswrapper[28120]: I0220 15:02:09.357770 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:09.358058 master-0 kubenswrapper[28120]: I0220 15:02:09.357899 28120 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 15:02:09.358058 master-0 kubenswrapper[28120]: I0220 15:02:09.357918 28120 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56ff46cdb00d28519af7c0cdc9ea8d11-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 15:02:09.358211 master-0 kubenswrapper[28120]: I0220 15:02:09.358032 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:09.359137 master-0 kubenswrapper[28120]: I0220 15:02:09.359061 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d03a1e6620a92c780b0a91c72a55bc8b-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"d03a1e6620a92c780b0a91c72a55bc8b\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:10.003703 master-0 kubenswrapper[28120]: I0220 15:02:10.003618 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 20 15:02:10.004052 master-0 kubenswrapper[28120]: I0220 15:02:10.003908 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="1bf2da21-9f35-4d95-aabf-7a9c8264474f" containerName="installer" containerID="cri-o://a40f295459b4a2af08fc1680c69148647ff567e1baab8a7d9c6564aa36182635" gracePeriod=30 Feb 20 15:02:10.025145 master-0 kubenswrapper[28120]: I0220 15:02:10.025049 28120 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 15:02:10.025608 master-0 kubenswrapper[28120]: I0220 15:02:10.025545 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24c827995023caaffd01654949c8d4dd" containerName="kube-controller-manager" containerID="cri-o://d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5" gracePeriod=30 Feb 20 15:02:10.025716 master-0 kubenswrapper[28120]: I0220 15:02:10.025625 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24c827995023caaffd01654949c8d4dd" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639" gracePeriod=30 Feb 20 15:02:10.025815 master-0 kubenswrapper[28120]: I0220 15:02:10.025644 28120 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24c827995023caaffd01654949c8d4dd" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10" gracePeriod=30 Feb 20 15:02:10.025815 master-0 kubenswrapper[28120]: I0220 15:02:10.025751 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24c827995023caaffd01654949c8d4dd" containerName="cluster-policy-controller" containerID="cri-o://180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b" gracePeriod=30 Feb 20 15:02:10.027474 master-0 kubenswrapper[28120]: I0220 15:02:10.027216 28120 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 15:02:10.027742 master-0 kubenswrapper[28120]: E0220 15:02:10.027694 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c827995023caaffd01654949c8d4dd" containerName="kube-controller-manager-recovery-controller" Feb 20 15:02:10.027742 master-0 kubenswrapper[28120]: I0220 15:02:10.027723 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c827995023caaffd01654949c8d4dd" containerName="kube-controller-manager-recovery-controller" Feb 20 15:02:10.028001 master-0 kubenswrapper[28120]: E0220 15:02:10.027762 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c827995023caaffd01654949c8d4dd" containerName="cluster-policy-controller" Feb 20 15:02:10.028001 master-0 kubenswrapper[28120]: I0220 15:02:10.027775 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c827995023caaffd01654949c8d4dd" containerName="cluster-policy-controller" Feb 20 15:02:10.028001 master-0 kubenswrapper[28120]: E0220 15:02:10.027797 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c827995023caaffd01654949c8d4dd" 
containerName="kube-controller-manager" Feb 20 15:02:10.028001 master-0 kubenswrapper[28120]: I0220 15:02:10.027805 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c827995023caaffd01654949c8d4dd" containerName="kube-controller-manager" Feb 20 15:02:10.028001 master-0 kubenswrapper[28120]: E0220 15:02:10.027816 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c827995023caaffd01654949c8d4dd" containerName="kube-controller-manager-cert-syncer" Feb 20 15:02:10.028001 master-0 kubenswrapper[28120]: I0220 15:02:10.027824 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c827995023caaffd01654949c8d4dd" containerName="kube-controller-manager-cert-syncer" Feb 20 15:02:10.028001 master-0 kubenswrapper[28120]: I0220 15:02:10.027989 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c827995023caaffd01654949c8d4dd" containerName="kube-controller-manager-recovery-controller" Feb 20 15:02:10.028001 master-0 kubenswrapper[28120]: I0220 15:02:10.028014 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c827995023caaffd01654949c8d4dd" containerName="cluster-policy-controller" Feb 20 15:02:10.028586 master-0 kubenswrapper[28120]: I0220 15:02:10.028034 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c827995023caaffd01654949c8d4dd" containerName="kube-controller-manager" Feb 20 15:02:10.028586 master-0 kubenswrapper[28120]: I0220 15:02:10.028053 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c827995023caaffd01654949c8d4dd" containerName="kube-controller-manager-cert-syncer" Feb 20 15:02:10.067992 master-0 kubenswrapper[28120]: I0220 15:02:10.067303 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/84d9b64313fdfb9864d29171f85c889a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"84d9b64313fdfb9864d29171f85c889a\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:10.067992 master-0 kubenswrapper[28120]: I0220 15:02:10.067394 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/84d9b64313fdfb9864d29171f85c889a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"84d9b64313fdfb9864d29171f85c889a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:10.069446 master-0 kubenswrapper[28120]: I0220 15:02:10.069099 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler-cert-syncer/0.log" Feb 20 15:02:10.071314 master-0 kubenswrapper[28120]: I0220 15:02:10.070388 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_56ff46cdb00d28519af7c0cdc9ea8d11/kube-scheduler/0.log" Feb 20 15:02:10.071314 master-0 kubenswrapper[28120]: I0220 15:02:10.070973 28120 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181" exitCode=0 Feb 20 15:02:10.071314 master-0 kubenswrapper[28120]: I0220 15:02:10.071014 28120 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c" exitCode=0 Feb 20 15:02:10.071314 master-0 kubenswrapper[28120]: I0220 15:02:10.071036 28120 generic.go:334] "Generic (PLEG): container finished" podID="56ff46cdb00d28519af7c0cdc9ea8d11" containerID="87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d" exitCode=2 Feb 20 15:02:10.071314 master-0 kubenswrapper[28120]: I0220 15:02:10.071154 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:10.071707 master-0 kubenswrapper[28120]: I0220 15:02:10.071655 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" path="/var/lib/kubelet/pods/56ff46cdb00d28519af7c0cdc9ea8d11/volumes" Feb 20 15:02:10.080466 master-0 kubenswrapper[28120]: I0220 15:02:10.079245 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="56ff46cdb00d28519af7c0cdc9ea8d11" podUID="d03a1e6620a92c780b0a91c72a55bc8b" Feb 20 15:02:10.080635 master-0 kubenswrapper[28120]: I0220 15:02:10.080525 28120 scope.go:117] "RemoveContainer" containerID="2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181" Feb 20 15:02:10.083209 master-0 kubenswrapper[28120]: I0220 15:02:10.081605 28120 generic.go:334] "Generic (PLEG): container finished" podID="70fc5e85-d7d2-45f7-89ff-889a248d6103" containerID="cbe7e13cae8823ba9986d71a265d2c28a0e68eafe45c89b8009a07e5eb6852e2" exitCode=0 Feb 20 15:02:10.083209 master-0 kubenswrapper[28120]: I0220 15:02:10.081655 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"70fc5e85-d7d2-45f7-89ff-889a248d6103","Type":"ContainerDied","Data":"cbe7e13cae8823ba9986d71a265d2c28a0e68eafe45c89b8009a07e5eb6852e2"} Feb 20 15:02:10.168565 master-0 kubenswrapper[28120]: I0220 15:02:10.168504 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/84d9b64313fdfb9864d29171f85c889a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"84d9b64313fdfb9864d29171f85c889a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:10.169126 master-0 kubenswrapper[28120]: I0220 15:02:10.168685 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/84d9b64313fdfb9864d29171f85c889a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"84d9b64313fdfb9864d29171f85c889a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:10.169126 master-0 kubenswrapper[28120]: I0220 15:02:10.168693 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/84d9b64313fdfb9864d29171f85c889a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"84d9b64313fdfb9864d29171f85c889a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:10.169126 master-0 kubenswrapper[28120]: I0220 15:02:10.168871 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/84d9b64313fdfb9864d29171f85c889a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"84d9b64313fdfb9864d29171f85c889a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:10.237316 master-0 kubenswrapper[28120]: I0220 15:02:10.237254 28120 scope.go:117] "RemoveContainer" containerID="dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c" Feb 20 15:02:10.243146 master-0 kubenswrapper[28120]: I0220 15:02:10.243091 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24c827995023caaffd01654949c8d4dd/kube-controller-manager-cert-syncer/0.log" Feb 20 15:02:10.244194 master-0 kubenswrapper[28120]: I0220 15:02:10.244145 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:10.247497 master-0 kubenswrapper[28120]: I0220 15:02:10.247427 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="24c827995023caaffd01654949c8d4dd" podUID="84d9b64313fdfb9864d29171f85c889a" Feb 20 15:02:10.265100 master-0 kubenswrapper[28120]: I0220 15:02:10.265041 28120 scope.go:117] "RemoveContainer" containerID="87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d" Feb 20 15:02:10.269786 master-0 kubenswrapper[28120]: I0220 15:02:10.269716 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-resource-dir\") pod \"24c827995023caaffd01654949c8d4dd\" (UID: \"24c827995023caaffd01654949c8d4dd\") " Feb 20 15:02:10.269786 master-0 kubenswrapper[28120]: I0220 15:02:10.269788 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-cert-dir\") pod \"24c827995023caaffd01654949c8d4dd\" (UID: \"24c827995023caaffd01654949c8d4dd\") " Feb 20 15:02:10.270092 master-0 kubenswrapper[28120]: I0220 15:02:10.270059 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "24c827995023caaffd01654949c8d4dd" (UID: "24c827995023caaffd01654949c8d4dd"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:02:10.270213 master-0 kubenswrapper[28120]: I0220 15:02:10.270100 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "24c827995023caaffd01654949c8d4dd" (UID: "24c827995023caaffd01654949c8d4dd"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:02:10.287691 master-0 kubenswrapper[28120]: I0220 15:02:10.287642 28120 scope.go:117] "RemoveContainer" containerID="c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b" Feb 20 15:02:10.314400 master-0 kubenswrapper[28120]: I0220 15:02:10.314303 28120 scope.go:117] "RemoveContainer" containerID="5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f" Feb 20 15:02:10.344733 master-0 kubenswrapper[28120]: I0220 15:02:10.344684 28120 scope.go:117] "RemoveContainer" containerID="2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181" Feb 20 15:02:10.345258 master-0 kubenswrapper[28120]: E0220 15:02:10.345205 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181\": container with ID starting with 2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181 not found: ID does not exist" containerID="2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181" Feb 20 15:02:10.345348 master-0 kubenswrapper[28120]: I0220 15:02:10.345268 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181"} err="failed to get container status \"2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181\": rpc error: code = NotFound desc = could not find container 
\"2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181\": container with ID starting with 2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181 not found: ID does not exist"
Feb 20 15:02:10.345348 master-0 kubenswrapper[28120]: I0220 15:02:10.345307 28120 scope.go:117] "RemoveContainer" containerID="dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c"
Feb 20 15:02:10.345684 master-0 kubenswrapper[28120]: E0220 15:02:10.345657 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c\": container with ID starting with dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c not found: ID does not exist" containerID="dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c"
Feb 20 15:02:10.345765 master-0 kubenswrapper[28120]: I0220 15:02:10.345683 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c"} err="failed to get container status \"dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c\": rpc error: code = NotFound desc = could not find container \"dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c\": container with ID starting with dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c not found: ID does not exist"
Feb 20 15:02:10.345765 master-0 kubenswrapper[28120]: I0220 15:02:10.345702 28120 scope.go:117] "RemoveContainer" containerID="87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d"
Feb 20 15:02:10.346066 master-0 kubenswrapper[28120]: E0220 15:02:10.346004 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d\": container with ID starting with 87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d not found: ID does not exist" containerID="87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d"
Feb 20 15:02:10.346549 master-0 kubenswrapper[28120]: I0220 15:02:10.346067 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d"} err="failed to get container status \"87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d\": rpc error: code = NotFound desc = could not find container \"87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d\": container with ID starting with 87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d not found: ID does not exist"
Feb 20 15:02:10.346549 master-0 kubenswrapper[28120]: I0220 15:02:10.346108 28120 scope.go:117] "RemoveContainer" containerID="c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b"
Feb 20 15:02:10.346549 master-0 kubenswrapper[28120]: E0220 15:02:10.346433 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b\": container with ID starting with c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b not found: ID does not exist" containerID="c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b"
Feb 20 15:02:10.346549 master-0 kubenswrapper[28120]: I0220 15:02:10.346458 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b"} err="failed to get container status \"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b\": rpc error: code = NotFound desc = could not find container \"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b\": container with ID starting with c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b not found: ID does not exist"
Feb 20 15:02:10.346549 master-0 kubenswrapper[28120]: I0220 15:02:10.346480 28120 scope.go:117] "RemoveContainer" containerID="5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f"
Feb 20 15:02:10.346897 master-0 kubenswrapper[28120]: E0220 15:02:10.346829 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f\": container with ID starting with 5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f not found: ID does not exist" containerID="5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f"
Feb 20 15:02:10.347003 master-0 kubenswrapper[28120]: I0220 15:02:10.346907 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f"} err="failed to get container status \"5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f\": rpc error: code = NotFound desc = could not find container \"5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f\": container with ID starting with 5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f not found: ID does not exist"
Feb 20 15:02:10.347003 master-0 kubenswrapper[28120]: I0220 15:02:10.346971 28120 scope.go:117] "RemoveContainer" containerID="2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181"
Feb 20 15:02:10.347438 master-0 kubenswrapper[28120]: I0220 15:02:10.347383 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181"} err="failed to get container status \"2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181\": rpc error: code = NotFound desc = could not find container \"2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181\": container with ID starting with 2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181 not found: ID does not exist"
Feb 20 15:02:10.347438 master-0 kubenswrapper[28120]: I0220 15:02:10.347422 28120 scope.go:117] "RemoveContainer" containerID="dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c"
Feb 20 15:02:10.347849 master-0 kubenswrapper[28120]: I0220 15:02:10.347743 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c"} err="failed to get container status \"dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c\": rpc error: code = NotFound desc = could not find container \"dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c\": container with ID starting with dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c not found: ID does not exist"
Feb 20 15:02:10.347849 master-0 kubenswrapper[28120]: I0220 15:02:10.347837 28120 scope.go:117] "RemoveContainer" containerID="87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d"
Feb 20 15:02:10.348369 master-0 kubenswrapper[28120]: I0220 15:02:10.348261 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d"} err="failed to get container status \"87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d\": rpc error: code = NotFound desc = could not find container \"87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d\": container with ID starting with 87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d not found: ID does not exist"
Feb 20 15:02:10.348369 master-0 kubenswrapper[28120]: I0220 15:02:10.348336 28120 scope.go:117] "RemoveContainer" containerID="c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b"
Feb 20 15:02:10.348845 master-0 kubenswrapper[28120]: I0220 15:02:10.348650 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b"} err="failed to get container status \"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b\": rpc error: code = NotFound desc = could not find container \"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b\": container with ID starting with c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b not found: ID does not exist"
Feb 20 15:02:10.348845 master-0 kubenswrapper[28120]: I0220 15:02:10.348833 28120 scope.go:117] "RemoveContainer" containerID="5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f"
Feb 20 15:02:10.349110 master-0 kubenswrapper[28120]: I0220 15:02:10.349072 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f"} err="failed to get container status \"5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f\": rpc error: code = NotFound desc = could not find container \"5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f\": container with ID starting with 5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f not found: ID does not exist"
Feb 20 15:02:10.349110 master-0 kubenswrapper[28120]: I0220 15:02:10.349102 28120 scope.go:117] "RemoveContainer" containerID="2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181"
Feb 20 15:02:10.349431 master-0 kubenswrapper[28120]: I0220 15:02:10.349397 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181"} err="failed to get container status \"2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181\": rpc error: code = NotFound desc = could not find container \"2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181\": container with ID starting with 2d69a8a671a418a2381ced63f417b7176debbb1396560ddc4494cbd5fd7c1181 not found: ID does not exist"
Feb 20 15:02:10.349431 master-0 kubenswrapper[28120]: I0220 15:02:10.349421 28120 scope.go:117] "RemoveContainer" containerID="dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c"
Feb 20 15:02:10.349728 master-0 kubenswrapper[28120]: I0220 15:02:10.349683 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c"} err="failed to get container status \"dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c\": rpc error: code = NotFound desc = could not find container \"dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c\": container with ID starting with dfe0488e465509557fc7c21a3d0a73f49fa948422001e6f0b5507cb47b8f099c not found: ID does not exist"
Feb 20 15:02:10.349728 master-0 kubenswrapper[28120]: I0220 15:02:10.349707 28120 scope.go:117] "RemoveContainer" containerID="87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d"
Feb 20 15:02:10.350009 master-0 kubenswrapper[28120]: I0220 15:02:10.349983 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d"} err="failed to get container status \"87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d\": rpc error: code = NotFound desc = could not find container \"87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d\": container with ID starting with 87c89bfe5e12b01109fe824108fee94ebfb249d11cc151574900dd86bef9864d not found: ID does not exist"
Feb 20 15:02:10.350069 master-0 kubenswrapper[28120]: I0220 15:02:10.350008 28120 scope.go:117] "RemoveContainer" containerID="c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b"
Feb 20 15:02:10.350257 master-0 kubenswrapper[28120]: I0220 15:02:10.350220 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b"} err="failed to get container status \"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b\": rpc error: code = NotFound desc = could not find container \"c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b\": container with ID starting with c5a6623ab7ff0bf9731c96cb7b1d96c4cfae871e88253ecb7ca918d5f76ded0b not found: ID does not exist"
Feb 20 15:02:10.350257 master-0 kubenswrapper[28120]: I0220 15:02:10.350245 28120 scope.go:117] "RemoveContainer" containerID="5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f"
Feb 20 15:02:10.350450 master-0 kubenswrapper[28120]: I0220 15:02:10.350423 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f"} err="failed to get container status \"5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f\": rpc error: code = NotFound desc = could not find container \"5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f\": container with ID starting with 5fd11c241512437f7ede9a9872d81a94c5fdc3791c360491e8d4397e63b0e19f not found: ID does not exist"
Feb 20 15:02:10.371312 master-0 kubenswrapper[28120]: I0220 15:02:10.371279 28120 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:02:10.371312 master-0 kubenswrapper[28120]: I0220 15:02:10.371306 28120 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24c827995023caaffd01654949c8d4dd-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:02:11.097601 master-0 kubenswrapper[28120]: I0220 15:02:11.097510 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24c827995023caaffd01654949c8d4dd/kube-controller-manager-cert-syncer/0.log"
Feb 20 15:02:11.098628 master-0 kubenswrapper[28120]: I0220 15:02:11.098581 28120 generic.go:334] "Generic (PLEG): container finished" podID="24c827995023caaffd01654949c8d4dd" containerID="23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639" exitCode=0
Feb 20 15:02:11.098628 master-0 kubenswrapper[28120]: I0220 15:02:11.098623 28120 generic.go:334] "Generic (PLEG): container finished" podID="24c827995023caaffd01654949c8d4dd" containerID="a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10" exitCode=2
Feb 20 15:02:11.099438 master-0 kubenswrapper[28120]: I0220 15:02:11.098641 28120 generic.go:334] "Generic (PLEG): container finished" podID="24c827995023caaffd01654949c8d4dd" containerID="180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b" exitCode=0
Feb 20 15:02:11.099438 master-0 kubenswrapper[28120]: I0220 15:02:11.098655 28120 generic.go:334] "Generic (PLEG): container finished" podID="24c827995023caaffd01654949c8d4dd" containerID="d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5" exitCode=0
Feb 20 15:02:11.099438 master-0 kubenswrapper[28120]: I0220 15:02:11.098738 28120 scope.go:117] "RemoveContainer" containerID="23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639"
Feb 20 15:02:11.099438 master-0 kubenswrapper[28120]: I0220 15:02:11.098845 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:02:11.110967 master-0 kubenswrapper[28120]: I0220 15:02:11.105024 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="24c827995023caaffd01654949c8d4dd" podUID="84d9b64313fdfb9864d29171f85c889a"
Feb 20 15:02:11.110967 master-0 kubenswrapper[28120]: I0220 15:02:11.106055 28120 generic.go:334] "Generic (PLEG): container finished" podID="5d093ed7-8264-4834-ac33-5fa08f4b27c2" containerID="b34d21b695cd7db4b5e00374b0cee29abc38c42fe0f55b67b7403761107934f5" exitCode=0
Feb 20 15:02:11.110967 master-0 kubenswrapper[28120]: I0220 15:02:11.106472 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"5d093ed7-8264-4834-ac33-5fa08f4b27c2","Type":"ContainerDied","Data":"b34d21b695cd7db4b5e00374b0cee29abc38c42fe0f55b67b7403761107934f5"}
Feb 20 15:02:11.126975 master-0 kubenswrapper[28120]: I0220 15:02:11.126808 28120 scope.go:117] "RemoveContainer" containerID="a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10"
Feb 20 15:02:11.150514 master-0 kubenswrapper[28120]: I0220 15:02:11.150121 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="24c827995023caaffd01654949c8d4dd" podUID="84d9b64313fdfb9864d29171f85c889a"
Feb 20 15:02:11.168510 master-0 kubenswrapper[28120]: I0220 15:02:11.168433 28120 scope.go:117] "RemoveContainer" containerID="180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b"
Feb 20 15:02:11.192124 master-0 kubenswrapper[28120]: I0220 15:02:11.192059 28120 scope.go:117] "RemoveContainer" containerID="d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5"
Feb 20 15:02:11.220066 master-0 kubenswrapper[28120]: I0220 15:02:11.220002 28120 scope.go:117] "RemoveContainer" containerID="23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639"
Feb 20 15:02:11.220842 master-0 kubenswrapper[28120]: E0220 15:02:11.220783 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639\": container with ID starting with 23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639 not found: ID does not exist" containerID="23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639"
Feb 20 15:02:11.220984 master-0 kubenswrapper[28120]: I0220 15:02:11.220841 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639"} err="failed to get container status \"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639\": rpc error: code = NotFound desc = could not find container \"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639\": container with ID starting with 23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639 not found: ID does not exist"
Feb 20 15:02:11.220984 master-0 kubenswrapper[28120]: I0220 15:02:11.220880 28120 scope.go:117] "RemoveContainer" containerID="a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10"
Feb 20 15:02:11.221522 master-0 kubenswrapper[28120]: E0220 15:02:11.221453 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10\": container with ID starting with a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10 not found: ID does not exist" containerID="a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10"
Feb 20 15:02:11.221613 master-0 kubenswrapper[28120]: I0220 15:02:11.221520 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10"} err="failed to get container status \"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10\": rpc error: code = NotFound desc = could not find container \"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10\": container with ID starting with a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10 not found: ID does not exist"
Feb 20 15:02:11.221613 master-0 kubenswrapper[28120]: I0220 15:02:11.221563 28120 scope.go:117] "RemoveContainer" containerID="180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b"
Feb 20 15:02:11.222205 master-0 kubenswrapper[28120]: E0220 15:02:11.222148 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b\": container with ID starting with 180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b not found: ID does not exist" containerID="180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b"
Feb 20 15:02:11.222302 master-0 kubenswrapper[28120]: I0220 15:02:11.222200 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b"} err="failed to get container status \"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b\": rpc error: code = NotFound desc = could not find container \"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b\": container with ID starting with 180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b not found: ID does not exist"
Feb 20 15:02:11.222302 master-0 kubenswrapper[28120]: I0220 15:02:11.222239 28120 scope.go:117] "RemoveContainer" containerID="d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5"
Feb 20 15:02:11.222691 master-0 kubenswrapper[28120]: E0220 15:02:11.222639 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5\": container with ID starting with d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5 not found: ID does not exist" containerID="d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5"
Feb 20 15:02:11.222691 master-0 kubenswrapper[28120]: I0220 15:02:11.222678 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5"} err="failed to get container status \"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5\": rpc error: code = NotFound desc = could not find container \"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5\": container with ID starting with d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5 not found: ID does not exist"
Feb 20 15:02:11.222848 master-0 kubenswrapper[28120]: I0220 15:02:11.222705 28120 scope.go:117] "RemoveContainer" containerID="23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639"
Feb 20 15:02:11.223218 master-0 kubenswrapper[28120]: I0220 15:02:11.223137 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639"} err="failed to get container status \"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639\": rpc error: code = NotFound desc = could not find container \"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639\": container with ID starting with 23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639 not found: ID does not exist"
Feb 20 15:02:11.223326 master-0 kubenswrapper[28120]: I0220 15:02:11.223231 28120 scope.go:117] "RemoveContainer" containerID="a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10"
Feb 20 15:02:11.223635 master-0 kubenswrapper[28120]: I0220 15:02:11.223570 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10"} err="failed to get container status \"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10\": rpc error: code = NotFound desc = could not find container \"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10\": container with ID starting with a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10 not found: ID does not exist"
Feb 20 15:02:11.223635 master-0 kubenswrapper[28120]: I0220 15:02:11.223609 28120 scope.go:117] "RemoveContainer" containerID="180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b"
Feb 20 15:02:11.223892 master-0 kubenswrapper[28120]: I0220 15:02:11.223850 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b"} err="failed to get container status \"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b\": rpc error: code = NotFound desc = could not find container \"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b\": container with ID starting with 180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b not found: ID does not exist"
Feb 20 15:02:11.223892 master-0 kubenswrapper[28120]: I0220 15:02:11.223882 28120 scope.go:117] "RemoveContainer" containerID="d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5"
Feb 20 15:02:11.224183 master-0 kubenswrapper[28120]: I0220 15:02:11.224128 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5"} err="failed to get container status \"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5\": rpc error: code = NotFound desc = could not find container \"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5\": container with ID starting with d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5 not found: ID does not exist"
Feb 20 15:02:11.224183 master-0 kubenswrapper[28120]: I0220 15:02:11.224162 28120 scope.go:117] "RemoveContainer" containerID="23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639"
Feb 20 15:02:11.224607 master-0 kubenswrapper[28120]: I0220 15:02:11.224550 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639"} err="failed to get container status \"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639\": rpc error: code = NotFound desc = could not find container \"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639\": container with ID starting with 23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639 not found: ID does not exist"
Feb 20 15:02:11.224607 master-0 kubenswrapper[28120]: I0220 15:02:11.224592 28120 scope.go:117] "RemoveContainer" containerID="a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10"
Feb 20 15:02:11.225198 master-0 kubenswrapper[28120]: I0220 15:02:11.225139 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10"} err="failed to get container status \"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10\": rpc error: code = NotFound desc = could not find container \"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10\": container with ID starting with a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10 not found: ID does not exist"
Feb 20 15:02:11.225321 master-0 kubenswrapper[28120]: I0220 15:02:11.225182 28120 scope.go:117] "RemoveContainer" containerID="180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b"
Feb 20 15:02:11.226273 master-0 kubenswrapper[28120]: I0220 15:02:11.226210 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b"} err="failed to get container status \"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b\": rpc error: code = NotFound desc = could not find container \"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b\": container with ID starting with 180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b not found: ID does not exist"
Feb 20 15:02:11.226385 master-0 kubenswrapper[28120]: I0220 15:02:11.226297 28120 scope.go:117] "RemoveContainer" containerID="d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5"
Feb 20 15:02:11.227228 master-0 kubenswrapper[28120]: I0220 15:02:11.227166 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5"} err="failed to get container status \"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5\": rpc error: code = NotFound desc = could not find container \"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5\": container with ID starting with d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5 not found: ID does not exist"
Feb 20 15:02:11.227228 master-0 kubenswrapper[28120]: I0220 15:02:11.227210 28120 scope.go:117] "RemoveContainer" containerID="23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639"
Feb 20 15:02:11.227590 master-0 kubenswrapper[28120]: I0220 15:02:11.227535 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639"} err="failed to get container status \"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639\": rpc error: code = NotFound desc = could not find container \"23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639\": container with ID starting with 23fb7b63e11a53b5e439e8fb99e6e20be8b931075556620be9501071ce681639 not found: ID does not exist"
Feb 20 15:02:11.227590 master-0 kubenswrapper[28120]: I0220 15:02:11.227572 28120 scope.go:117] "RemoveContainer" containerID="a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10"
Feb 20 15:02:11.228567 master-0 kubenswrapper[28120]: I0220 15:02:11.228471 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10"} err="failed to get container status \"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10\": rpc error: code = NotFound desc = could not find container \"a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10\": container with ID starting with a41b224da92416973656c73b2f60ee0fb0d37c915f4f61237168c061de5d4b10 not found: ID does not exist"
Feb 20 15:02:11.228567 master-0 kubenswrapper[28120]: I0220 15:02:11.228557 28120 scope.go:117] "RemoveContainer" containerID="180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b"
Feb 20 15:02:11.229121 master-0 kubenswrapper[28120]: I0220 15:02:11.229021 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b"} err="failed to get container status \"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b\": rpc error: code = NotFound desc = could not find container \"180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b\": container with ID starting with 180abf92aeb65bac9b032f29db7a74baf9e81cb6cac5e121dfc110b6cb8fd90b not found: ID does not exist"
Feb 20 15:02:11.229121 master-0 kubenswrapper[28120]: I0220 15:02:11.229105 28120 scope.go:117] "RemoveContainer" containerID="d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5"
Feb 20 15:02:11.229611 master-0 kubenswrapper[28120]: I0220 15:02:11.229554 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5"} err="failed to get container status \"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5\": rpc error: code = NotFound desc = could not find container \"d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5\": container with ID starting with d07f7e85c564453082fecba8bf92fdd17c8f9ba9741c50639415a09b2716b6e5 not found: ID does not exist"
Feb 20 15:02:11.553277 master-0 kubenswrapper[28120]: I0220 15:02:11.553208 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Feb 20 15:02:11.592472 master-0 kubenswrapper[28120]: I0220 15:02:11.592416 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70fc5e85-d7d2-45f7-89ff-889a248d6103-var-lock\") pod \"70fc5e85-d7d2-45f7-89ff-889a248d6103\" (UID: \"70fc5e85-d7d2-45f7-89ff-889a248d6103\") "
Feb 20 15:02:11.592964 master-0 kubenswrapper[28120]: I0220 15:02:11.592877 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70fc5e85-d7d2-45f7-89ff-889a248d6103-kube-api-access\") pod \"70fc5e85-d7d2-45f7-89ff-889a248d6103\" (UID: \"70fc5e85-d7d2-45f7-89ff-889a248d6103\") "
Feb 20 15:02:11.593161 master-0 kubenswrapper[28120]: I0220 15:02:11.593135 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70fc5e85-d7d2-45f7-89ff-889a248d6103-kubelet-dir\") pod \"70fc5e85-d7d2-45f7-89ff-889a248d6103\" (UID: \"70fc5e85-d7d2-45f7-89ff-889a248d6103\") "
Feb 20 15:02:11.593699 master-0 kubenswrapper[28120]: I0220 15:02:11.593668 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70fc5e85-d7d2-45f7-89ff-889a248d6103-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "70fc5e85-d7d2-45f7-89ff-889a248d6103" (UID: "70fc5e85-d7d2-45f7-89ff-889a248d6103"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:02:11.593870 master-0 kubenswrapper[28120]: I0220 15:02:11.593846 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70fc5e85-d7d2-45f7-89ff-889a248d6103-var-lock" (OuterVolumeSpecName: "var-lock") pod "70fc5e85-d7d2-45f7-89ff-889a248d6103" (UID: "70fc5e85-d7d2-45f7-89ff-889a248d6103"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:02:11.611272 master-0 kubenswrapper[28120]: I0220 15:02:11.611197 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70fc5e85-d7d2-45f7-89ff-889a248d6103-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "70fc5e85-d7d2-45f7-89ff-889a248d6103" (UID: "70fc5e85-d7d2-45f7-89ff-889a248d6103"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:02:11.695814 master-0 kubenswrapper[28120]: I0220 15:02:11.695723 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70fc5e85-d7d2-45f7-89ff-889a248d6103-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 15:02:11.695814 master-0 kubenswrapper[28120]: I0220 15:02:11.695788 28120 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70fc5e85-d7d2-45f7-89ff-889a248d6103-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:02:11.695814 master-0 kubenswrapper[28120]: I0220 15:02:11.695803 28120 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70fc5e85-d7d2-45f7-89ff-889a248d6103-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 15:02:12.069308 master-0 kubenswrapper[28120]: I0220 15:02:12.069221 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24c827995023caaffd01654949c8d4dd" path="/var/lib/kubelet/pods/24c827995023caaffd01654949c8d4dd/volumes"
Feb 20 15:02:12.117553 master-0 kubenswrapper[28120]: I0220 15:02:12.117486 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Feb 20 15:02:12.118202 master-0 kubenswrapper[28120]: I0220 15:02:12.117545 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"70fc5e85-d7d2-45f7-89ff-889a248d6103","Type":"ContainerDied","Data":"05835867e5ee070c821b046db82a7d3ad2696b555f519aadc50a86a08db40896"}
Feb 20 15:02:12.118202 master-0 kubenswrapper[28120]: I0220 15:02:12.118192 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05835867e5ee070c821b046db82a7d3ad2696b555f519aadc50a86a08db40896"
Feb 20 15:02:12.203170 master-0 kubenswrapper[28120]: I0220 15:02:12.203112 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Feb 20 15:02:12.208740 master-0 kubenswrapper[28120]: I0220 15:02:12.208696 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Feb 20 15:02:12.304239 master-0 kubenswrapper[28120]: I0220 15:02:12.304140 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") pod \"fea431d7-394f-4639-abd6-c70a28921fc6\" (UID: \"fea431d7-394f-4639-abd6-c70a28921fc6\") "
Feb 20 15:02:12.311592 master-0 kubenswrapper[28120]: I0220 15:02:12.311513 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fea431d7-394f-4639-abd6-c70a28921fc6" (UID: "fea431d7-394f-4639-abd6-c70a28921fc6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:02:12.407062 master-0 kubenswrapper[28120]: I0220 15:02:12.406980 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fea431d7-394f-4639-abd6-c70a28921fc6-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 15:02:12.536529 master-0 kubenswrapper[28120]: I0220 15:02:12.536485 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 20 15:02:12.610087 master-0 kubenswrapper[28120]: I0220 15:02:12.609860 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d093ed7-8264-4834-ac33-5fa08f4b27c2-var-lock\") pod \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\" (UID: \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\") "
Feb 20 15:02:12.610087 master-0 kubenswrapper[28120]: I0220 15:02:12.609975 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d093ed7-8264-4834-ac33-5fa08f4b27c2-kube-api-access\") pod \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\" (UID: \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\") "
Feb 20 15:02:12.610087 master-0 kubenswrapper[28120]: I0220 15:02:12.610035 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d093ed7-8264-4834-ac33-5fa08f4b27c2-kubelet-dir\") pod \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\" (UID: \"5d093ed7-8264-4834-ac33-5fa08f4b27c2\") "
Feb 20 15:02:12.610484 master-0 kubenswrapper[28120]: I0220 15:02:12.610407 28120
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d093ed7-8264-4834-ac33-5fa08f4b27c2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5d093ed7-8264-4834-ac33-5fa08f4b27c2" (UID: "5d093ed7-8264-4834-ac33-5fa08f4b27c2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:02:12.610484 master-0 kubenswrapper[28120]: I0220 15:02:12.610446 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d093ed7-8264-4834-ac33-5fa08f4b27c2-var-lock" (OuterVolumeSpecName: "var-lock") pod "5d093ed7-8264-4834-ac33-5fa08f4b27c2" (UID: "5d093ed7-8264-4834-ac33-5fa08f4b27c2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:02:12.613525 master-0 kubenswrapper[28120]: I0220 15:02:12.613479 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d093ed7-8264-4834-ac33-5fa08f4b27c2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5d093ed7-8264-4834-ac33-5fa08f4b27c2" (UID: "5d093ed7-8264-4834-ac33-5fa08f4b27c2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:02:12.712393 master-0 kubenswrapper[28120]: I0220 15:02:12.712340 28120 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d093ed7-8264-4834-ac33-5fa08f4b27c2-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 15:02:12.712393 master-0 kubenswrapper[28120]: I0220 15:02:12.712377 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d093ed7-8264-4834-ac33-5fa08f4b27c2-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 15:02:12.712393 master-0 kubenswrapper[28120]: I0220 15:02:12.712393 28120 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d093ed7-8264-4834-ac33-5fa08f4b27c2-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 15:02:13.130433 master-0 kubenswrapper[28120]: I0220 15:02:13.130343 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"5d093ed7-8264-4834-ac33-5fa08f4b27c2","Type":"ContainerDied","Data":"cccd26426182e81137b9535965baa43a046299ee365b6481daa843ad12599e9e"} Feb 20 15:02:13.130433 master-0 kubenswrapper[28120]: I0220 15:02:13.130408 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cccd26426182e81137b9535965baa43a046299ee365b6481daa843ad12599e9e" Feb 20 15:02:13.130990 master-0 kubenswrapper[28120]: I0220 15:02:13.130491 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 20 15:02:13.206275 master-0 kubenswrapper[28120]: I0220 15:02:13.203090 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 20 15:02:13.206275 master-0 kubenswrapper[28120]: E0220 15:02:13.203476 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70fc5e85-d7d2-45f7-89ff-889a248d6103" containerName="installer" Feb 20 15:02:13.206275 master-0 kubenswrapper[28120]: I0220 15:02:13.203498 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="70fc5e85-d7d2-45f7-89ff-889a248d6103" containerName="installer" Feb 20 15:02:13.206275 master-0 kubenswrapper[28120]: E0220 15:02:13.203543 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d093ed7-8264-4834-ac33-5fa08f4b27c2" containerName="installer" Feb 20 15:02:13.206275 master-0 kubenswrapper[28120]: I0220 15:02:13.203556 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d093ed7-8264-4834-ac33-5fa08f4b27c2" containerName="installer" Feb 20 15:02:13.206275 master-0 kubenswrapper[28120]: I0220 15:02:13.203783 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="70fc5e85-d7d2-45f7-89ff-889a248d6103" containerName="installer" Feb 20 15:02:13.206275 master-0 kubenswrapper[28120]: I0220 15:02:13.203838 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d093ed7-8264-4834-ac33-5fa08f4b27c2" containerName="installer" Feb 20 15:02:13.206275 master-0 kubenswrapper[28120]: I0220 15:02:13.204485 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 15:02:13.226622 master-0 kubenswrapper[28120]: I0220 15:02:13.223991 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40e038c9-a412-4741-9c07-8507d21d0a80-kube-api-access\") pod \"installer-5-master-0\" (UID: \"40e038c9-a412-4741-9c07-8507d21d0a80\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 15:02:13.226622 master-0 kubenswrapper[28120]: I0220 15:02:13.224088 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/40e038c9-a412-4741-9c07-8507d21d0a80-var-lock\") pod \"installer-5-master-0\" (UID: \"40e038c9-a412-4741-9c07-8507d21d0a80\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 15:02:13.226622 master-0 kubenswrapper[28120]: I0220 15:02:13.224168 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40e038c9-a412-4741-9c07-8507d21d0a80-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"40e038c9-a412-4741-9c07-8507d21d0a80\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 15:02:13.229679 master-0 kubenswrapper[28120]: I0220 15:02:13.229615 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 20 15:02:13.325481 master-0 kubenswrapper[28120]: I0220 15:02:13.325367 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40e038c9-a412-4741-9c07-8507d21d0a80-kube-api-access\") pod \"installer-5-master-0\" (UID: \"40e038c9-a412-4741-9c07-8507d21d0a80\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 15:02:13.325481 master-0 kubenswrapper[28120]: I0220 15:02:13.325474 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/40e038c9-a412-4741-9c07-8507d21d0a80-var-lock\") pod \"installer-5-master-0\" (UID: \"40e038c9-a412-4741-9c07-8507d21d0a80\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 15:02:13.325768 master-0 kubenswrapper[28120]: I0220 15:02:13.325547 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40e038c9-a412-4741-9c07-8507d21d0a80-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"40e038c9-a412-4741-9c07-8507d21d0a80\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 15:02:13.325768 master-0 kubenswrapper[28120]: I0220 15:02:13.325649 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40e038c9-a412-4741-9c07-8507d21d0a80-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"40e038c9-a412-4741-9c07-8507d21d0a80\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 15:02:13.325768 master-0 kubenswrapper[28120]: I0220 15:02:13.325701 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/40e038c9-a412-4741-9c07-8507d21d0a80-var-lock\") pod \"installer-5-master-0\" (UID: \"40e038c9-a412-4741-9c07-8507d21d0a80\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 15:02:13.346811 master-0 kubenswrapper[28120]: I0220 15:02:13.346732 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40e038c9-a412-4741-9c07-8507d21d0a80-kube-api-access\") pod \"installer-5-master-0\" (UID: \"40e038c9-a412-4741-9c07-8507d21d0a80\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 15:02:13.570608 master-0 kubenswrapper[28120]: I0220 15:02:13.570508 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 15:02:14.084515 master-0 kubenswrapper[28120]: I0220 15:02:14.084429 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 20 15:02:14.093084 master-0 kubenswrapper[28120]: W0220 15:02:14.093004 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod40e038c9_a412_4741_9c07_8507d21d0a80.slice/crio-e1e148b8faea4c89cf440274993fcb399b0ea5da683cd8d796c58537c33b90ee WatchSource:0}: Error finding container e1e148b8faea4c89cf440274993fcb399b0ea5da683cd8d796c58537c33b90ee: Status 404 returned error can't find the container with id e1e148b8faea4c89cf440274993fcb399b0ea5da683cd8d796c58537c33b90ee Feb 20 15:02:14.142421 master-0 kubenswrapper[28120]: I0220 15:02:14.142314 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"40e038c9-a412-4741-9c07-8507d21d0a80","Type":"ContainerStarted","Data":"e1e148b8faea4c89cf440274993fcb399b0ea5da683cd8d796c58537c33b90ee"} Feb 20 15:02:15.152257 master-0 kubenswrapper[28120]: I0220 15:02:15.152106 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"40e038c9-a412-4741-9c07-8507d21d0a80","Type":"ContainerStarted","Data":"bdfb44b6c21af6b326c2d60575eca99d4c5d5bde45ec3141310b494453f2188d"} Feb 20 15:02:15.178475 master-0 kubenswrapper[28120]: I0220 15:02:15.178374 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=2.178348714 podStartE2EDuration="2.178348714s" podCreationTimestamp="2026-02-20 15:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:02:15.173590626 +0000 UTC m=+73.434384259" watchObservedRunningTime="2026-02-20 
15:02:15.178348714 +0000 UTC m=+73.439142317" Feb 20 15:02:25.056429 master-0 kubenswrapper[28120]: I0220 15:02:25.056319 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:25.090006 master-0 kubenswrapper[28120]: I0220 15:02:25.089909 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="01e0d14a-fd9d-4995-8517-4582dd9b4b92" Feb 20 15:02:25.090006 master-0 kubenswrapper[28120]: I0220 15:02:25.089990 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="01e0d14a-fd9d-4995-8517-4582dd9b4b92" Feb 20 15:02:25.112017 master-0 kubenswrapper[28120]: I0220 15:02:25.107176 28120 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:25.112017 master-0 kubenswrapper[28120]: I0220 15:02:25.107517 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 15:02:25.112017 master-0 kubenswrapper[28120]: I0220 15:02:25.111594 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 15:02:25.123551 master-0 kubenswrapper[28120]: I0220 15:02:25.122391 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:25.128555 master-0 kubenswrapper[28120]: I0220 15:02:25.128463 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 15:02:25.172956 master-0 kubenswrapper[28120]: W0220 15:02:25.171492 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84d9b64313fdfb9864d29171f85c889a.slice/crio-1641d50753e8494afe275b6218209d1775ce70c927153becd12fb44c24b2452d WatchSource:0}: Error finding container 1641d50753e8494afe275b6218209d1775ce70c927153becd12fb44c24b2452d: Status 404 returned error can't find the container with id 1641d50753e8494afe275b6218209d1775ce70c927153becd12fb44c24b2452d Feb 20 15:02:25.245402 master-0 kubenswrapper[28120]: I0220 15:02:25.245327 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"84d9b64313fdfb9864d29171f85c889a","Type":"ContainerStarted","Data":"1641d50753e8494afe275b6218209d1775ce70c927153becd12fb44c24b2452d"} Feb 20 15:02:26.258538 master-0 kubenswrapper[28120]: I0220 15:02:26.258464 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"84d9b64313fdfb9864d29171f85c889a","Type":"ContainerStarted","Data":"820e2cac15d8737b74874cfbaa6d9476ab99c3ff31d4202e866c117704183192"} Feb 20 15:02:26.258538 master-0 kubenswrapper[28120]: I0220 15:02:26.258542 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"84d9b64313fdfb9864d29171f85c889a","Type":"ContainerStarted","Data":"7cad8acf7dbd02c3f457405f1eed4616fd7216aef734c1327428ee2be5b7437f"} Feb 20 15:02:26.259215 master-0 kubenswrapper[28120]: I0220 15:02:26.258562 28120 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"84d9b64313fdfb9864d29171f85c889a","Type":"ContainerStarted","Data":"08fd62cd27292ede66927864096610eb2cbbf6bc7bf62eed86f0d310cd58267b"} Feb 20 15:02:27.288949 master-0 kubenswrapper[28120]: I0220 15:02:27.288285 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"84d9b64313fdfb9864d29171f85c889a","Type":"ContainerStarted","Data":"f8983494495af1cf1e5c78f42107fc4b49d0dc1f28d1662f214add882adc38d5"} Feb 20 15:02:27.344686 master-0 kubenswrapper[28120]: I0220 15:02:27.344608 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.344587231 podStartE2EDuration="2.344587231s" podCreationTimestamp="2026-02-20 15:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:02:27.3393327 +0000 UTC m=+85.600126273" watchObservedRunningTime="2026-02-20 15:02:27.344587231 +0000 UTC m=+85.605380794" Feb 20 15:02:31.320313 master-0 kubenswrapper[28120]: I0220 15:02:31.320247 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_1bf2da21-9f35-4d95-aabf-7a9c8264474f/installer/0.log" Feb 20 15:02:31.321084 master-0 kubenswrapper[28120]: I0220 15:02:31.320338 28120 generic.go:334] "Generic (PLEG): container finished" podID="1bf2da21-9f35-4d95-aabf-7a9c8264474f" containerID="a40f295459b4a2af08fc1680c69148647ff567e1baab8a7d9c6564aa36182635" exitCode=1 Feb 20 15:02:31.321084 master-0 kubenswrapper[28120]: I0220 15:02:31.320392 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" 
event={"ID":"1bf2da21-9f35-4d95-aabf-7a9c8264474f","Type":"ContainerDied","Data":"a40f295459b4a2af08fc1680c69148647ff567e1baab8a7d9c6564aa36182635"} Feb 20 15:02:31.321084 master-0 kubenswrapper[28120]: I0220 15:02:31.320466 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"1bf2da21-9f35-4d95-aabf-7a9c8264474f","Type":"ContainerDied","Data":"273eb8665f38754c3ce35c1f750d3b1d7eb49cd90191a9bd12e88453101394cb"} Feb 20 15:02:31.321084 master-0 kubenswrapper[28120]: I0220 15:02:31.320488 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="273eb8665f38754c3ce35c1f750d3b1d7eb49cd90191a9bd12e88453101394cb" Feb 20 15:02:31.360074 master-0 kubenswrapper[28120]: I0220 15:02:31.359984 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_1bf2da21-9f35-4d95-aabf-7a9c8264474f/installer/0.log" Feb 20 15:02:31.360340 master-0 kubenswrapper[28120]: I0220 15:02:31.360098 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:02:31.507559 master-0 kubenswrapper[28120]: I0220 15:02:31.507487 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bf2da21-9f35-4d95-aabf-7a9c8264474f-kube-api-access\") pod \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\" (UID: \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\") " Feb 20 15:02:31.508005 master-0 kubenswrapper[28120]: I0220 15:02:31.507966 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bf2da21-9f35-4d95-aabf-7a9c8264474f-kubelet-dir\") pod \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\" (UID: \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\") " Feb 20 15:02:31.508313 master-0 kubenswrapper[28120]: I0220 15:02:31.508113 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bf2da21-9f35-4d95-aabf-7a9c8264474f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1bf2da21-9f35-4d95-aabf-7a9c8264474f" (UID: "1bf2da21-9f35-4d95-aabf-7a9c8264474f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:02:31.508425 master-0 kubenswrapper[28120]: I0220 15:02:31.508264 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1bf2da21-9f35-4d95-aabf-7a9c8264474f-var-lock\") pod \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\" (UID: \"1bf2da21-9f35-4d95-aabf-7a9c8264474f\") " Feb 20 15:02:31.508603 master-0 kubenswrapper[28120]: I0220 15:02:31.508553 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bf2da21-9f35-4d95-aabf-7a9c8264474f-var-lock" (OuterVolumeSpecName: "var-lock") pod "1bf2da21-9f35-4d95-aabf-7a9c8264474f" (UID: "1bf2da21-9f35-4d95-aabf-7a9c8264474f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:02:31.509176 master-0 kubenswrapper[28120]: I0220 15:02:31.509098 28120 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1bf2da21-9f35-4d95-aabf-7a9c8264474f-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 15:02:31.509403 master-0 kubenswrapper[28120]: I0220 15:02:31.509366 28120 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bf2da21-9f35-4d95-aabf-7a9c8264474f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 15:02:31.513133 master-0 kubenswrapper[28120]: I0220 15:02:31.513046 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf2da21-9f35-4d95-aabf-7a9c8264474f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1bf2da21-9f35-4d95-aabf-7a9c8264474f" (UID: "1bf2da21-9f35-4d95-aabf-7a9c8264474f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:02:31.610463 master-0 kubenswrapper[28120]: I0220 15:02:31.610382 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bf2da21-9f35-4d95-aabf-7a9c8264474f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 15:02:32.330228 master-0 kubenswrapper[28120]: I0220 15:02:32.330172 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 20 15:02:32.365032 master-0 kubenswrapper[28120]: I0220 15:02:32.364776 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 20 15:02:32.369823 master-0 kubenswrapper[28120]: I0220 15:02:32.369736 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 20 15:02:34.070513 master-0 kubenswrapper[28120]: I0220 15:02:34.070454 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf2da21-9f35-4d95-aabf-7a9c8264474f" path="/var/lib/kubelet/pods/1bf2da21-9f35-4d95-aabf-7a9c8264474f/volumes" Feb 20 15:02:35.123335 master-0 kubenswrapper[28120]: I0220 15:02:35.123226 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:35.123335 master-0 kubenswrapper[28120]: I0220 15:02:35.123321 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:35.124237 master-0 kubenswrapper[28120]: I0220 15:02:35.123813 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:35.124237 master-0 kubenswrapper[28120]: I0220 15:02:35.123892 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:35.130971 master-0 kubenswrapper[28120]: I0220 15:02:35.130898 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:35.131386 master-0 kubenswrapper[28120]: I0220 15:02:35.131316 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:35.368475 master-0 kubenswrapper[28120]: I0220 15:02:35.368401 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:35.369341 master-0 kubenswrapper[28120]: I0220 15:02:35.369285 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:02:40.222272 master-0 kubenswrapper[28120]: I0220 15:02:40.222154 28120 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod56ff46cdb00d28519af7c0cdc9ea8d11"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod56ff46cdb00d28519af7c0cdc9ea8d11] : Timed out while waiting for systemd to remove kubepods-burstable-pod56ff46cdb00d28519af7c0cdc9ea8d11.slice" Feb 20 15:02:40.223371 master-0 kubenswrapper[28120]: E0220 15:02:40.222261 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod56ff46cdb00d28519af7c0cdc9ea8d11] : unable to destroy cgroup paths for cgroup [kubepods burstable pod56ff46cdb00d28519af7c0cdc9ea8d11] : Timed out while waiting for systemd to remove kubepods-burstable-pod56ff46cdb00d28519af7c0cdc9ea8d11.slice" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="56ff46cdb00d28519af7c0cdc9ea8d11" Feb 20 15:02:40.419232 master-0 kubenswrapper[28120]: I0220 15:02:40.419124 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:40.425491 master-0 kubenswrapper[28120]: I0220 15:02:40.425420 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="56ff46cdb00d28519af7c0cdc9ea8d11" podUID="d03a1e6620a92c780b0a91c72a55bc8b" Feb 20 15:02:46.055560 master-0 kubenswrapper[28120]: I0220 15:02:46.055510 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:46.086732 master-0 kubenswrapper[28120]: I0220 15:02:46.086682 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="924d83cb-6d5b-414e-8f2d-9a299c01fe26" Feb 20 15:02:46.086846 master-0 kubenswrapper[28120]: I0220 15:02:46.086740 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="924d83cb-6d5b-414e-8f2d-9a299c01fe26" Feb 20 15:02:46.097795 master-0 kubenswrapper[28120]: I0220 15:02:46.097632 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 20 15:02:46.106394 master-0 kubenswrapper[28120]: I0220 15:02:46.106337 28120 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:46.107396 master-0 kubenswrapper[28120]: I0220 15:02:46.107366 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 20 15:02:46.121849 master-0 kubenswrapper[28120]: I0220 15:02:46.121813 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:46.123277 master-0 kubenswrapper[28120]: I0220 15:02:46.123252 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 20 15:02:46.158618 master-0 kubenswrapper[28120]: W0220 15:02:46.158516 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd03a1e6620a92c780b0a91c72a55bc8b.slice/crio-8ed89871660dfd40f0be570fcf087369073ab317f16418ef1ebfc1d2f9ce93ea WatchSource:0}: Error finding container 8ed89871660dfd40f0be570fcf087369073ab317f16418ef1ebfc1d2f9ce93ea: Status 404 returned error can't find the container with id 8ed89871660dfd40f0be570fcf087369073ab317f16418ef1ebfc1d2f9ce93ea Feb 20 15:02:46.469419 master-0 kubenswrapper[28120]: I0220 15:02:46.469340 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"7a308bd7a34dd6fcd303e66299bc82f8011528f355158d90adcf270dfc1485a6"} Feb 20 15:02:46.469419 master-0 kubenswrapper[28120]: I0220 15:02:46.469411 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"8ed89871660dfd40f0be570fcf087369073ab317f16418ef1ebfc1d2f9ce93ea"} Feb 20 15:02:47.481446 master-0 kubenswrapper[28120]: I0220 15:02:47.481364 28120 generic.go:334] "Generic (PLEG): container finished" podID="d03a1e6620a92c780b0a91c72a55bc8b" containerID="7a308bd7a34dd6fcd303e66299bc82f8011528f355158d90adcf270dfc1485a6" exitCode=0 Feb 20 15:02:47.481446 master-0 kubenswrapper[28120]: I0220 15:02:47.481433 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerDied","Data":"7a308bd7a34dd6fcd303e66299bc82f8011528f355158d90adcf270dfc1485a6"} Feb 20 15:02:48.491734 master-0 kubenswrapper[28120]: I0220 15:02:48.491665 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"58f8f3015aaf7aaa79e4044dbd9e2e7b0399fdf778fa6714523a632be84520b8"} Feb 20 15:02:48.491734 master-0 kubenswrapper[28120]: I0220 15:02:48.491729 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"f11b5c821ba902e718d05685a00d74dddcc41a98a483e714c4589ff766475718"} Feb 20 15:02:48.492315 master-0 kubenswrapper[28120]: I0220 15:02:48.491750 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"d03a1e6620a92c780b0a91c72a55bc8b","Type":"ContainerStarted","Data":"af45c20be6eb22a928705b10b2b9b7a897acb2f0d8404aa992f1b9bc3109627e"} Feb 20 15:02:48.492315 master-0 kubenswrapper[28120]: I0220 15:02:48.491835 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 20 15:02:48.515335 master-0 kubenswrapper[28120]: I0220 15:02:48.515239 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.515212343 podStartE2EDuration="2.515212343s" podCreationTimestamp="2026-02-20 15:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:02:48.515071429 +0000 UTC m=+106.775865012" watchObservedRunningTime="2026-02-20 15:02:48.515212343 +0000 UTC m=+106.776005936" Feb 20 
15:03:12.928057 master-0 kubenswrapper[28120]: I0220 15:03:12.927959 28120 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 20 15:03:12.929369 master-0 kubenswrapper[28120]: I0220 15:03:12.928393 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" containerID="cri-o://798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4" gracePeriod=15 Feb 20 15:03:12.929369 master-0 kubenswrapper[28120]: I0220 15:03:12.928621 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" containerID="cri-o://70c289149386153ec1fc6e6d4d67a823a97c082332c284d864c056158a4eb662" gracePeriod=15 Feb 20 15:03:12.929369 master-0 kubenswrapper[28120]: I0220 15:03:12.928666 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0" gracePeriod=15 Feb 20 15:03:12.929369 master-0 kubenswrapper[28120]: I0220 15:03:12.928710 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f" gracePeriod=15 Feb 20 15:03:12.929369 master-0 kubenswrapper[28120]: I0220 15:03:12.928745 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" containerID="cri-o://355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd" gracePeriod=15 Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.930345 28120 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: E0220 15:03:12.930676 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="setup" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.930690 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="setup" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: E0220 15:03:12.930722 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.930730 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: E0220 15:03:12.930740 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.930747 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: E0220 15:03:12.930756 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf2da21-9f35-4d95-aabf-7a9c8264474f" containerName="installer" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.930763 28120 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1bf2da21-9f35-4d95-aabf-7a9c8264474f" containerName="installer" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: E0220 15:03:12.930775 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.930782 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: E0220 15:03:12.930793 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.930800 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: E0220 15:03:12.930810 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.930817 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: E0220 15:03:12.930828 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.930834 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.931472 28120 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.931496 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.931530 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.931570 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf2da21-9f35-4d95-aabf-7a9c8264474f" containerName="installer" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.931590 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.931608 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" Feb 20 15:03:12.932182 master-0 kubenswrapper[28120]: I0220 15:03:12.931946 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 20 15:03:12.936400 master-0 kubenswrapper[28120]: I0220 15:03:12.936335 28120 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 20 15:03:12.937425 master-0 kubenswrapper[28120]: I0220 15:03:12.937390 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:12.950506 master-0 kubenswrapper[28120]: I0220 15:03:12.947026 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="eb342c942d3d92fd08ed7cf68fafb94c" podUID="487622064474ed0ec70f7bf2a0fcb80b" Feb 20 15:03:13.007202 master-0 kubenswrapper[28120]: E0220 15:03:13.007115 28120 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.056710 master-0 kubenswrapper[28120]: I0220 15:03:13.056639 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.056814 master-0 kubenswrapper[28120]: I0220 15:03:13.056719 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.056814 master-0 kubenswrapper[28120]: I0220 15:03:13.056749 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.056814 master-0 kubenswrapper[28120]: I0220 15:03:13.056775 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:03:13.056814 master-0 kubenswrapper[28120]: I0220 15:03:13.056801 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.057066 master-0 kubenswrapper[28120]: I0220 15:03:13.056863 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.057066 master-0 kubenswrapper[28120]: I0220 15:03:13.056889 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:03:13.057066 master-0 kubenswrapper[28120]: I0220 15:03:13.056911 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:03:13.158948 master-0 kubenswrapper[28120]: I0220 15:03:13.158853 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.159153 master-0 kubenswrapper[28120]: I0220 15:03:13.159009 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.159153 master-0 kubenswrapper[28120]: I0220 15:03:13.159071 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.159153 master-0 kubenswrapper[28120]: I0220 15:03:13.159135 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.159153 master-0 kubenswrapper[28120]: I0220 15:03:13.159144 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.159153 master-0 kubenswrapper[28120]: I0220 15:03:13.159215 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.159153 master-0 kubenswrapper[28120]: I0220 15:03:13.159229 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:03:13.159804 master-0 kubenswrapper[28120]: I0220 15:03:13.159315 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.159804 master-0 kubenswrapper[28120]: I0220 15:03:13.159410 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.159804 master-0 kubenswrapper[28120]: I0220 15:03:13.159456 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.159804 master-0 kubenswrapper[28120]: I0220 15:03:13.159465 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:03:13.159804 master-0 kubenswrapper[28120]: I0220 15:03:13.159408 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:03:13.159804 master-0 kubenswrapper[28120]: I0220 15:03:13.159543 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:03:13.159804 master-0 kubenswrapper[28120]: I0220 15:03:13.159572 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.159804 master-0 kubenswrapper[28120]: I0220 15:03:13.159621 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:03:13.159804 master-0 kubenswrapper[28120]: I0220 15:03:13.159572 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:03:13.308693 master-0 kubenswrapper[28120]: I0220 15:03:13.308609 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.346473 master-0 kubenswrapper[28120]: E0220 15:03:13.345900 28120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1895fca02e7732d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:2146f0e3671998cad8bbc2464b009ab7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on 
machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 15:03:13.344688852 +0000 UTC m=+131.605482455,LastTimestamp:2026-02-20 15:03:13.344688852 +0000 UTC m=+131.605482455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 20 15:03:13.723793 master-0 kubenswrapper[28120]: I0220 15:03:13.723715 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"2146f0e3671998cad8bbc2464b009ab7","Type":"ContainerStarted","Data":"959cece566de771c554ebb90f0e2c429caada188ff7e197cc96dd521087c7e3b"} Feb 20 15:03:13.724108 master-0 kubenswrapper[28120]: I0220 15:03:13.723808 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"2146f0e3671998cad8bbc2464b009ab7","Type":"ContainerStarted","Data":"c26f2fdbf2f262be0bc448aa02722665bcfbbc28f4654f5d7433a21c832264f2"} Feb 20 15:03:13.725980 master-0 kubenswrapper[28120]: E0220 15:03:13.725880 28120 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:03:13.729002 master-0 kubenswrapper[28120]: I0220 15:03:13.728904 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-check-endpoints/0.log" Feb 20 15:03:13.731851 master-0 kubenswrapper[28120]: I0220 15:03:13.731785 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 20 15:03:13.735409 master-0 kubenswrapper[28120]: I0220 
15:03:13.735341 28120 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="70c289149386153ec1fc6e6d4d67a823a97c082332c284d864c056158a4eb662" exitCode=0 Feb 20 15:03:13.735409 master-0 kubenswrapper[28120]: I0220 15:03:13.735399 28120 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0" exitCode=0 Feb 20 15:03:13.735630 master-0 kubenswrapper[28120]: I0220 15:03:13.735419 28120 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f" exitCode=0 Feb 20 15:03:13.735630 master-0 kubenswrapper[28120]: I0220 15:03:13.735435 28120 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd" exitCode=2 Feb 20 15:03:13.735630 master-0 kubenswrapper[28120]: I0220 15:03:13.735530 28120 scope.go:117] "RemoveContainer" containerID="77c5708572ab9b4b6918c12a1fcd864571adf469d8703ecc7203af8fab7885f3" Feb 20 15:03:13.739313 master-0 kubenswrapper[28120]: I0220 15:03:13.739256 28120 generic.go:334] "Generic (PLEG): container finished" podID="40e038c9-a412-4741-9c07-8507d21d0a80" containerID="bdfb44b6c21af6b326c2d60575eca99d4c5d5bde45ec3141310b494453f2188d" exitCode=0 Feb 20 15:03:13.739313 master-0 kubenswrapper[28120]: I0220 15:03:13.739307 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"40e038c9-a412-4741-9c07-8507d21d0a80","Type":"ContainerDied","Data":"bdfb44b6c21af6b326c2d60575eca99d4c5d5bde45ec3141310b494453f2188d"} Feb 20 15:03:13.740753 master-0 kubenswrapper[28120]: I0220 15:03:13.740683 28120 status_manager.go:851] "Failed to get status for pod" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" 
pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:03:14.760463 master-0 kubenswrapper[28120]: I0220 15:03:14.760357 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 20 15:03:14.848326 master-0 kubenswrapper[28120]: E0220 15:03:14.848179 28120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:03:14.849994 master-0 kubenswrapper[28120]: E0220 15:03:14.849907 28120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:03:14.851650 master-0 kubenswrapper[28120]: E0220 15:03:14.851575 28120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:03:14.852466 master-0 kubenswrapper[28120]: E0220 15:03:14.852405 28120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:03:14.853292 master-0 kubenswrapper[28120]: E0220 15:03:14.853233 28120 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:03:14.853292 master-0 kubenswrapper[28120]: I0220 15:03:14.853285 28120 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 20 15:03:14.853868 master-0 kubenswrapper[28120]: E0220 15:03:14.853821 28120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 20 15:03:15.062748 master-0 kubenswrapper[28120]: E0220 15:03:15.061800 28120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 20 15:03:15.242023 master-0 kubenswrapper[28120]: I0220 15:03:15.241968 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 20 15:03:15.242895 master-0 kubenswrapper[28120]: I0220 15:03:15.242855 28120 status_manager.go:851] "Failed to get status for pod" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:03:15.297879 master-0 kubenswrapper[28120]: I0220 15:03:15.297827 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 20 15:03:15.298750 master-0 kubenswrapper[28120]: I0220 15:03:15.298719 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:03:15.299888 master-0 kubenswrapper[28120]: I0220 15:03:15.299813 28120 status_manager.go:851] "Failed to get status for pod" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:03:15.300721 master-0 kubenswrapper[28120]: I0220 15:03:15.300637 28120 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:03:15.441673 master-0 kubenswrapper[28120]: I0220 15:03:15.441477 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " Feb 20 15:03:15.441673 master-0 kubenswrapper[28120]: I0220 15:03:15.441577 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " Feb 20 15:03:15.441673 master-0 kubenswrapper[28120]: I0220 15:03:15.441670 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40e038c9-a412-4741-9c07-8507d21d0a80-kubelet-dir\") pod \"40e038c9-a412-4741-9c07-8507d21d0a80\" (UID: \"40e038c9-a412-4741-9c07-8507d21d0a80\") " Feb 20 15:03:15.442472 master-0 kubenswrapper[28120]: I0220 15:03:15.441662 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:03:15.442472 master-0 kubenswrapper[28120]: I0220 15:03:15.441726 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") "
Feb 20 15:03:15.442472 master-0 kubenswrapper[28120]: I0220 15:03:15.441771 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40e038c9-a412-4741-9c07-8507d21d0a80-kube-api-access\") pod \"40e038c9-a412-4741-9c07-8507d21d0a80\" (UID: \"40e038c9-a412-4741-9c07-8507d21d0a80\") "
Feb 20 15:03:15.442472 master-0 kubenswrapper[28120]: I0220 15:03:15.441768 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e038c9-a412-4741-9c07-8507d21d0a80-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "40e038c9-a412-4741-9c07-8507d21d0a80" (UID: "40e038c9-a412-4741-9c07-8507d21d0a80"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:03:15.442472 master-0 kubenswrapper[28120]: I0220 15:03:15.441885 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e038c9-a412-4741-9c07-8507d21d0a80-var-lock" (OuterVolumeSpecName: "var-lock") pod "40e038c9-a412-4741-9c07-8507d21d0a80" (UID: "40e038c9-a412-4741-9c07-8507d21d0a80"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:03:15.442472 master-0 kubenswrapper[28120]: I0220 15:03:15.441837 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/40e038c9-a412-4741-9c07-8507d21d0a80-var-lock\") pod \"40e038c9-a412-4741-9c07-8507d21d0a80\" (UID: \"40e038c9-a412-4741-9c07-8507d21d0a80\") "
Feb 20 15:03:15.442472 master-0 kubenswrapper[28120]: I0220 15:03:15.441831 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:03:15.442472 master-0 kubenswrapper[28120]: I0220 15:03:15.441800 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:03:15.443303 master-0 kubenswrapper[28120]: I0220 15:03:15.442643 28120 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40e038c9-a412-4741-9c07-8507d21d0a80-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:03:15.443303 master-0 kubenswrapper[28120]: I0220 15:03:15.442680 28120 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:03:15.443303 master-0 kubenswrapper[28120]: I0220 15:03:15.442700 28120 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/40e038c9-a412-4741-9c07-8507d21d0a80-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 15:03:15.443303 master-0 kubenswrapper[28120]: I0220 15:03:15.442717 28120 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:03:15.443303 master-0 kubenswrapper[28120]: I0220 15:03:15.442734 28120 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:03:15.447029 master-0 kubenswrapper[28120]: I0220 15:03:15.446959 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e038c9-a412-4741-9c07-8507d21d0a80-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "40e038c9-a412-4741-9c07-8507d21d0a80" (UID: "40e038c9-a412-4741-9c07-8507d21d0a80"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:03:15.464130 master-0 kubenswrapper[28120]: E0220 15:03:15.463999 28120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 20 15:03:15.543993 master-0 kubenswrapper[28120]: I0220 15:03:15.543874 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40e038c9-a412-4741-9c07-8507d21d0a80-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 15:03:15.776082 master-0 kubenswrapper[28120]: I0220 15:03:15.775998 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log"
Feb 20 15:03:15.777793 master-0 kubenswrapper[28120]: I0220 15:03:15.777717 28120 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4" exitCode=0
Feb 20 15:03:15.777904 master-0 kubenswrapper[28120]: I0220 15:03:15.777819 28120 scope.go:117] "RemoveContainer" containerID="70c289149386153ec1fc6e6d4d67a823a97c082332c284d864c056158a4eb662"
Feb 20 15:03:15.777904 master-0 kubenswrapper[28120]: I0220 15:03:15.777851 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:03:15.780425 master-0 kubenswrapper[28120]: I0220 15:03:15.780349 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"40e038c9-a412-4741-9c07-8507d21d0a80","Type":"ContainerDied","Data":"e1e148b8faea4c89cf440274993fcb399b0ea5da683cd8d796c58537c33b90ee"}
Feb 20 15:03:15.780520 master-0 kubenswrapper[28120]: I0220 15:03:15.780430 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1e148b8faea4c89cf440274993fcb399b0ea5da683cd8d796c58537c33b90ee"
Feb 20 15:03:15.780520 master-0 kubenswrapper[28120]: I0220 15:03:15.780463 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Feb 20 15:03:15.805978 master-0 kubenswrapper[28120]: I0220 15:03:15.805912 28120 scope.go:117] "RemoveContainer" containerID="f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0"
Feb 20 15:03:15.817427 master-0 kubenswrapper[28120]: I0220 15:03:15.817357 28120 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:15.818773 master-0 kubenswrapper[28120]: I0220 15:03:15.818688 28120 status_manager.go:851] "Failed to get status for pod" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:15.819940 master-0 kubenswrapper[28120]: I0220 15:03:15.819854 28120 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:15.820897 master-0 kubenswrapper[28120]: I0220 15:03:15.820825 28120 status_manager.go:851] "Failed to get status for pod" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:15.831290 master-0 kubenswrapper[28120]: I0220 15:03:15.831230 28120 scope.go:117] "RemoveContainer" containerID="39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f"
Feb 20 15:03:15.857153 master-0 kubenswrapper[28120]: I0220 15:03:15.857083 28120 scope.go:117] "RemoveContainer" containerID="355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd"
Feb 20 15:03:15.881498 master-0 kubenswrapper[28120]: I0220 15:03:15.881424 28120 scope.go:117] "RemoveContainer" containerID="798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4"
Feb 20 15:03:15.911429 master-0 kubenswrapper[28120]: I0220 15:03:15.911358 28120 scope.go:117] "RemoveContainer" containerID="2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e"
Feb 20 15:03:15.940772 master-0 kubenswrapper[28120]: I0220 15:03:15.940724 28120 scope.go:117] "RemoveContainer" containerID="70c289149386153ec1fc6e6d4d67a823a97c082332c284d864c056158a4eb662"
Feb 20 15:03:15.941392 master-0 kubenswrapper[28120]: E0220 15:03:15.941338 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70c289149386153ec1fc6e6d4d67a823a97c082332c284d864c056158a4eb662\": container with ID starting with 70c289149386153ec1fc6e6d4d67a823a97c082332c284d864c056158a4eb662 not found: ID does not exist" containerID="70c289149386153ec1fc6e6d4d67a823a97c082332c284d864c056158a4eb662"
Feb 20 15:03:15.941466 master-0 kubenswrapper[28120]: I0220 15:03:15.941398 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c289149386153ec1fc6e6d4d67a823a97c082332c284d864c056158a4eb662"} err="failed to get container status \"70c289149386153ec1fc6e6d4d67a823a97c082332c284d864c056158a4eb662\": rpc error: code = NotFound desc = could not find container \"70c289149386153ec1fc6e6d4d67a823a97c082332c284d864c056158a4eb662\": container with ID starting with 70c289149386153ec1fc6e6d4d67a823a97c082332c284d864c056158a4eb662 not found: ID does not exist"
Feb 20 15:03:15.941466 master-0 kubenswrapper[28120]: I0220 15:03:15.941434 28120 scope.go:117] "RemoveContainer" containerID="f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0"
Feb 20 15:03:15.941911 master-0 kubenswrapper[28120]: E0220 15:03:15.941861 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0\": container with ID starting with f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0 not found: ID does not exist" containerID="f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0"
Feb 20 15:03:15.943302 master-0 kubenswrapper[28120]: I0220 15:03:15.941908 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0"} err="failed to get container status \"f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0\": rpc error: code = NotFound desc = could not find container \"f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0\": container with ID starting with f7706ff200b2846eeea63820bf2ee306105f8590609e6b62651139a96b21f3a0 not found: ID does not exist"
Feb 20 15:03:15.943302 master-0 kubenswrapper[28120]: I0220 15:03:15.943294 28120 scope.go:117] "RemoveContainer" containerID="39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f"
Feb 20 15:03:15.943816 master-0 kubenswrapper[28120]: E0220 15:03:15.943746 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f\": container with ID starting with 39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f not found: ID does not exist" containerID="39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f"
Feb 20 15:03:15.943875 master-0 kubenswrapper[28120]: I0220 15:03:15.943810 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f"} err="failed to get container status \"39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f\": rpc error: code = NotFound desc = could not find container \"39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f\": container with ID starting with 39189b545322f50b0910ed1efecdaa2e4608924890fcad29e9895c652836077f not found: ID does not exist"
Feb 20 15:03:15.943875 master-0 kubenswrapper[28120]: I0220 15:03:15.943832 28120 scope.go:117] "RemoveContainer" containerID="355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd"
Feb 20 15:03:15.944278 master-0 kubenswrapper[28120]: E0220 15:03:15.944235 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd\": container with ID starting with 355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd not found: ID does not exist" containerID="355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd"
Feb 20 15:03:15.944325 master-0 kubenswrapper[28120]: I0220 15:03:15.944275 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd"} err="failed to get container status \"355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd\": rpc error: code = NotFound desc = could not find container \"355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd\": container with ID starting with 355cedb8d26b37698e3a57c3d09006cbd9f428b85de301bc95a24404f10ef9fd not found: ID does not exist"
Feb 20 15:03:15.944325 master-0 kubenswrapper[28120]: I0220 15:03:15.944300 28120 scope.go:117] "RemoveContainer" containerID="798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4"
Feb 20 15:03:15.944701 master-0 kubenswrapper[28120]: E0220 15:03:15.944657 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4\": container with ID starting with 798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4 not found: ID does not exist" containerID="798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4"
Feb 20 15:03:15.944751 master-0 kubenswrapper[28120]: I0220 15:03:15.944696 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4"} err="failed to get container status \"798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4\": rpc error: code = NotFound desc = could not find container \"798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4\": container with ID starting with 798ea82daeb38f0c7b68436fab2a622bb37f8874bef02285ea669acff721c7d4 not found: ID does not exist"
Feb 20 15:03:15.944751 master-0 kubenswrapper[28120]: I0220 15:03:15.944720 28120 scope.go:117] "RemoveContainer" containerID="2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e"
Feb 20 15:03:15.945324 master-0 kubenswrapper[28120]: E0220 15:03:15.945244 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e\": container with ID starting with 2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e not found: ID does not exist" containerID="2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e"
Feb 20 15:03:15.945384 master-0 kubenswrapper[28120]: I0220 15:03:15.945339 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e"} err="failed to get container status \"2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e\": rpc error: code = NotFound desc = could not find container \"2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e\": container with ID starting with 2359af63f52b488394f4fa66a44d4982b382146adcf63bb193421cfeb1ecf07e not found: ID does not exist"
Feb 20 15:03:16.068046 master-0 kubenswrapper[28120]: I0220 15:03:16.067869 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb342c942d3d92fd08ed7cf68fafb94c" path="/var/lib/kubelet/pods/eb342c942d3d92fd08ed7cf68fafb94c/volumes"
Feb 20 15:03:16.265860 master-0 kubenswrapper[28120]: E0220 15:03:16.265714 28120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Feb 20 15:03:17.868030 master-0 kubenswrapper[28120]: E0220 15:03:17.867911 28120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Feb 20 15:03:21.069636 master-0 kubenswrapper[28120]: E0220 15:03:21.069187 28120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Feb 20 15:03:22.060842 master-0 kubenswrapper[28120]: I0220 15:03:22.060738 28120 status_manager.go:851] "Failed to get status for pod" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:22.860149 master-0 kubenswrapper[28120]: E0220 15:03:22.859828 28120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1895fca02e7732d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:2146f0e3671998cad8bbc2464b009ab7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 15:03:13.344688852 +0000 UTC m=+131.605482455,LastTimestamp:2026-02-20 15:03:13.344688852 +0000 UTC m=+131.605482455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 20 15:03:25.872202 master-0 kubenswrapper[28120]: I0220 15:03:25.872100 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_84d9b64313fdfb9864d29171f85c889a/kube-controller-manager/0.log"
Feb 20 15:03:25.872202 master-0 kubenswrapper[28120]: I0220 15:03:25.872169 28120 generic.go:334] "Generic (PLEG): container finished" podID="84d9b64313fdfb9864d29171f85c889a" containerID="08fd62cd27292ede66927864096610eb2cbbf6bc7bf62eed86f0d310cd58267b" exitCode=1
Feb 20 15:03:25.872202 master-0 kubenswrapper[28120]: I0220 15:03:25.872203 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"84d9b64313fdfb9864d29171f85c889a","Type":"ContainerDied","Data":"08fd62cd27292ede66927864096610eb2cbbf6bc7bf62eed86f0d310cd58267b"}
Feb 20 15:03:25.873159 master-0 kubenswrapper[28120]: I0220 15:03:25.872685 28120 scope.go:117] "RemoveContainer" containerID="08fd62cd27292ede66927864096610eb2cbbf6bc7bf62eed86f0d310cd58267b"
Feb 20 15:03:25.874396 master-0 kubenswrapper[28120]: I0220 15:03:25.874287 28120 status_manager.go:851] "Failed to get status for pod" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:25.875494 master-0 kubenswrapper[28120]: I0220 15:03:25.875385 28120 status_manager.go:851] "Failed to get status for pod" podUID="84d9b64313fdfb9864d29171f85c889a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:26.056151 master-0 kubenswrapper[28120]: I0220 15:03:26.056059 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:03:26.058844 master-0 kubenswrapper[28120]: I0220 15:03:26.058764 28120 status_manager.go:851] "Failed to get status for pod" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:26.060017 master-0 kubenswrapper[28120]: I0220 15:03:26.059902 28120 status_manager.go:851] "Failed to get status for pod" podUID="84d9b64313fdfb9864d29171f85c889a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:26.118597 master-0 kubenswrapper[28120]: I0220 15:03:26.118513 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="45bee7fa-cec6-4f30-8472-8711bf3cc59f"
Feb 20 15:03:26.118597 master-0 kubenswrapper[28120]: I0220 15:03:26.118589 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="45bee7fa-cec6-4f30-8472-8711bf3cc59f"
Feb 20 15:03:26.120161 master-0 kubenswrapper[28120]: E0220 15:03:26.120090 28120 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:03:26.120765 master-0 kubenswrapper[28120]: I0220 15:03:26.120714 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:03:26.164996 master-0 kubenswrapper[28120]: W0220 15:03:26.164898 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487622064474ed0ec70f7bf2a0fcb80b.slice/crio-f3c0f7b197d206d189189d01e26a90e2990d24a2452d31dbbff5961b0a77c513 WatchSource:0}: Error finding container f3c0f7b197d206d189189d01e26a90e2990d24a2452d31dbbff5961b0a77c513: Status 404 returned error can't find the container with id f3c0f7b197d206d189189d01e26a90e2990d24a2452d31dbbff5961b0a77c513
Feb 20 15:03:26.885415 master-0 kubenswrapper[28120]: I0220 15:03:26.885309 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_84d9b64313fdfb9864d29171f85c889a/kube-controller-manager/0.log"
Feb 20 15:03:26.886352 master-0 kubenswrapper[28120]: I0220 15:03:26.885466 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"84d9b64313fdfb9864d29171f85c889a","Type":"ContainerStarted","Data":"39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1"}
Feb 20 15:03:26.887105 master-0 kubenswrapper[28120]: I0220 15:03:26.887026 28120 status_manager.go:851] "Failed to get status for pod" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:26.887998 master-0 kubenswrapper[28120]: I0220 15:03:26.887907 28120 status_manager.go:851] "Failed to get status for pod" podUID="84d9b64313fdfb9864d29171f85c889a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:26.888307 master-0 kubenswrapper[28120]: I0220 15:03:26.888262 28120 generic.go:334] "Generic (PLEG): container finished" podID="487622064474ed0ec70f7bf2a0fcb80b" containerID="8f02c0ec25be8ae620d31fbe6d306947830daf49b081fbed83f45fe35912fea0" exitCode=0
Feb 20 15:03:26.888423 master-0 kubenswrapper[28120]: I0220 15:03:26.888319 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerDied","Data":"8f02c0ec25be8ae620d31fbe6d306947830daf49b081fbed83f45fe35912fea0"}
Feb 20 15:03:26.888423 master-0 kubenswrapper[28120]: I0220 15:03:26.888351 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"f3c0f7b197d206d189189d01e26a90e2990d24a2452d31dbbff5961b0a77c513"}
Feb 20 15:03:26.888737 master-0 kubenswrapper[28120]: I0220 15:03:26.888688 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="45bee7fa-cec6-4f30-8472-8711bf3cc59f"
Feb 20 15:03:26.888737 master-0 kubenswrapper[28120]: I0220 15:03:26.888720 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="45bee7fa-cec6-4f30-8472-8711bf3cc59f"
Feb 20 15:03:26.889578 master-0 kubenswrapper[28120]: E0220 15:03:26.889514 28120 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:03:26.889672 master-0 kubenswrapper[28120]: I0220 15:03:26.889524 28120 status_manager.go:851] "Failed to get status for pod" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:26.890572 master-0 kubenswrapper[28120]: I0220 15:03:26.890456 28120 status_manager.go:851] "Failed to get status for pod" podUID="84d9b64313fdfb9864d29171f85c889a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:03:27.898170 master-0 kubenswrapper[28120]: I0220 15:03:27.898107 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"c03e02eb5af2eb1fb6c7b0bf2b150876badffa6c377fc640e73fe017195c3955"}
Feb 20 15:03:27.898170 master-0 kubenswrapper[28120]: I0220 15:03:27.898167 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"949dc0da88d4a4460d2cb97e0f3c24fd8362d270a7436b3ee3b6374fb6d9feb8"}
Feb 20 15:03:28.910323 master-0 kubenswrapper[28120]: I0220 15:03:28.910260 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"9ffb45550122418b770ffe4855a0d3263f2d5ff36573901ef2f27c5cf8525e1a"}
Feb 20 15:03:28.910323 master-0 kubenswrapper[28120]: I0220 15:03:28.910313 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"68667de4f5fd3fb66f60d8286fdaa32ea9f4758986e5bccc5b915c5cb496818d"}
Feb 20 15:03:28.910323 master-0 kubenswrapper[28120]: I0220 15:03:28.910326 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"2a0e88d060e1d652903a6e0684ac8c18552ee919654dcf829cbacf4c11cd8f63"}
Feb 20 15:03:28.911278 master-0 kubenswrapper[28120]: I0220 15:03:28.910617 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:03:28.911278 master-0 kubenswrapper[28120]: I0220 15:03:28.910743 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="45bee7fa-cec6-4f30-8472-8711bf3cc59f"
Feb 20 15:03:28.911278 master-0 kubenswrapper[28120]: I0220 15:03:28.910790 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="45bee7fa-cec6-4f30-8472-8711bf3cc59f"
Feb 20 15:03:31.121182 master-0 kubenswrapper[28120]: I0220 15:03:31.121127 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:03:31.122242 master-0 kubenswrapper[28120]: I0220 15:03:31.122181 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:03:31.130118 master-0 kubenswrapper[28120]: I0220 15:03:31.130056 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:03:33.943021 master-0 kubenswrapper[28120]: I0220 15:03:33.937576 28120 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:03:33.963469 master-0 kubenswrapper[28120]: I0220 15:03:33.963394 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="45bee7fa-cec6-4f30-8472-8711bf3cc59f"
Feb 20 15:03:33.963469 master-0 kubenswrapper[28120]: I0220 15:03:33.963442 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="45bee7fa-cec6-4f30-8472-8711bf3cc59f"
Feb 20 15:03:33.980250 master-0 kubenswrapper[28120]: I0220 15:03:33.979320 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:03:34.037775 master-0 kubenswrapper[28120]: I0220 15:03:34.037711 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="487622064474ed0ec70f7bf2a0fcb80b" podUID="71055f17-3b7a-49ac-b848-3ca2598ea5cb"
Feb 20 15:03:34.972283 master-0 kubenswrapper[28120]: I0220 15:03:34.972213 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="45bee7fa-cec6-4f30-8472-8711bf3cc59f"
Feb 20 15:03:34.972283 master-0 kubenswrapper[28120]: I0220 15:03:34.972267 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="45bee7fa-cec6-4f30-8472-8711bf3cc59f"
Feb 20 15:03:34.976689 master-0 kubenswrapper[28120]: I0220 15:03:34.976613 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="487622064474ed0ec70f7bf2a0fcb80b" podUID="71055f17-3b7a-49ac-b848-3ca2598ea5cb"
Feb 20 15:03:35.122869 master-0 kubenswrapper[28120]: I0220 15:03:35.122790 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:03:35.122869 master-0 kubenswrapper[28120]: I0220 15:03:35.122872 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:03:35.128850 master-0 kubenswrapper[28120]: I0220 15:03:35.128803 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:03:36.129973 master-0 kubenswrapper[28120]: I0220 15:03:36.129876 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 20 15:03:44.064463 master-0 kubenswrapper[28120]: I0220 15:03:44.064369 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 20 15:03:44.581190 master-0 kubenswrapper[28120]: I0220 15:03:44.581095 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-cp6wb"
Feb 20 15:03:44.929028 master-0 kubenswrapper[28120]: I0220 15:03:44.928870 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 20 15:03:44.936064 master-0 kubenswrapper[28120]: I0220 15:03:44.935999 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 20 15:03:45.133197 master-0 kubenswrapper[28120]: I0220 15:03:45.130647 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:03:45.370963 master-0 kubenswrapper[28120]: I0220 15:03:45.370868 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-7m26s"
Feb 20 15:03:45.489094 master-0 kubenswrapper[28120]: I0220 15:03:45.488971 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 20 15:03:45.494055 master-0 kubenswrapper[28120]: I0220 15:03:45.493915 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 20 15:03:45.696613 master-0 kubenswrapper[28120]: I0220 15:03:45.696541 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 20 15:03:45.705282 master-0 kubenswrapper[28120]: I0220 15:03:45.705213 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 20 15:03:45.783727 master-0 kubenswrapper[28120]: I0220 15:03:45.783636 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 20 15:03:45.883343 master-0 kubenswrapper[28120]: I0220 15:03:45.883267 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 20 15:03:45.966422 master-0 kubenswrapper[28120]: I0220 15:03:45.966275 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 20 15:03:46.138414 master-0 kubenswrapper[28120]: I0220 15:03:46.138331 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 20 15:03:46.140570 master-0 kubenswrapper[28120]: I0220 15:03:46.140251 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 20 15:03:46.320536 master-0 kubenswrapper[28120]: I0220 15:03:46.320442 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 20 15:03:46.348475 master-0 kubenswrapper[28120]: I0220 15:03:46.348387 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-67ksg"
Feb 20 15:03:46.392785 master-0 kubenswrapper[28120]: I0220 15:03:46.392719 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 20 15:03:46.455634 master-0 kubenswrapper[28120]: I0220 15:03:46.455547 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 20 15:03:46.496718 master-0 kubenswrapper[28120]: I0220 15:03:46.496648 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 20 15:03:46.625824 master-0 kubenswrapper[28120]: I0220 15:03:46.625667 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 20 15:03:46.765613 master-0 kubenswrapper[28120]: I0220 15:03:46.765532 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 20 15:03:46.811835 master-0 kubenswrapper[28120]: I0220 15:03:46.811763 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 20 15:03:46.864884 master-0 kubenswrapper[28120]: I0220 15:03:46.864802 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 20 15:03:46.876085 master-0 kubenswrapper[28120]: I0220 15:03:46.875964 28120 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 20 15:03:46.975262 master-0 kubenswrapper[28120]: I0220 15:03:46.975185 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 20 15:03:46.982225 master-0 kubenswrapper[28120]: I0220 15:03:46.982156 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 20 15:03:47.036856 master-0 kubenswrapper[28120]: I0220 15:03:47.036774 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 20 15:03:47.123024 master-0 kubenswrapper[28120]: I0220 15:03:47.122907 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 20 15:03:47.224013 master-0 kubenswrapper[28120]: I0220 15:03:47.223865 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 20 15:03:47.256738 master-0 kubenswrapper[28120]: I0220 15:03:47.256662 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xrgsf" Feb 20 15:03:47.575947 master-0 kubenswrapper[28120]: I0220 15:03:47.575848 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 20 15:03:47.617110 master-0 kubenswrapper[28120]: I0220 15:03:47.617040 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 20 15:03:47.759002 master-0 kubenswrapper[28120]: I0220 15:03:47.758876 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 20 15:03:47.829022 master-0 kubenswrapper[28120]: I0220 15:03:47.828849 28120 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 20 15:03:47.969436 master-0 kubenswrapper[28120]: I0220 15:03:47.969354 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 20 15:03:47.974317 master-0 kubenswrapper[28120]: I0220 15:03:47.974267 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 20 15:03:48.025772 master-0 kubenswrapper[28120]: I0220 15:03:48.025710 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 20 15:03:48.119719 master-0 kubenswrapper[28120]: I0220 15:03:48.119544 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 20 15:03:48.188446 master-0 kubenswrapper[28120]: I0220 15:03:48.188371 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-c8rnz" Feb 20 15:03:48.190117 master-0 kubenswrapper[28120]: I0220 15:03:48.190051 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 20 15:03:48.299538 master-0 kubenswrapper[28120]: I0220 15:03:48.299435 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 20 15:03:48.476177 master-0 kubenswrapper[28120]: I0220 15:03:48.476050 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 20 15:03:48.614197 master-0 kubenswrapper[28120]: I0220 15:03:48.614083 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-cpp79" Feb 20 15:03:48.636892 master-0 kubenswrapper[28120]: I0220 
15:03:48.636819 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 20 15:03:48.727173 master-0 kubenswrapper[28120]: I0220 15:03:48.727037 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 20 15:03:48.753749 master-0 kubenswrapper[28120]: I0220 15:03:48.753661 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 20 15:03:48.787439 master-0 kubenswrapper[28120]: I0220 15:03:48.787320 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 20 15:03:48.809549 master-0 kubenswrapper[28120]: I0220 15:03:48.809457 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 20 15:03:48.838394 master-0 kubenswrapper[28120]: I0220 15:03:48.838290 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 20 15:03:48.843017 master-0 kubenswrapper[28120]: I0220 15:03:48.842973 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 20 15:03:48.873343 master-0 kubenswrapper[28120]: I0220 15:03:48.873311 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 20 15:03:48.879546 master-0 kubenswrapper[28120]: I0220 15:03:48.879482 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 20 15:03:48.883259 master-0 kubenswrapper[28120]: I0220 15:03:48.883237 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 20 15:03:48.902700 master-0 kubenswrapper[28120]: I0220 
15:03:48.902634 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-tljfd" Feb 20 15:03:48.966939 master-0 kubenswrapper[28120]: I0220 15:03:48.966874 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 20 15:03:48.969070 master-0 kubenswrapper[28120]: I0220 15:03:48.969018 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 20 15:03:49.110299 master-0 kubenswrapper[28120]: I0220 15:03:49.110234 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 20 15:03:49.161540 master-0 kubenswrapper[28120]: I0220 15:03:49.161434 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 20 15:03:49.174110 master-0 kubenswrapper[28120]: I0220 15:03:49.174014 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 20 15:03:49.200679 master-0 kubenswrapper[28120]: I0220 15:03:49.200613 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 20 15:03:49.247727 master-0 kubenswrapper[28120]: I0220 15:03:49.247668 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 20 15:03:49.343600 master-0 kubenswrapper[28120]: I0220 15:03:49.343529 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 20 15:03:49.370775 master-0 kubenswrapper[28120]: I0220 15:03:49.370641 28120 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 20 15:03:49.698328 master-0 kubenswrapper[28120]: I0220 15:03:49.698170 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 20 15:03:49.742364 master-0 kubenswrapper[28120]: I0220 15:03:49.742283 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 20 15:03:49.799487 master-0 kubenswrapper[28120]: I0220 15:03:49.799404 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 20 15:03:49.943183 master-0 kubenswrapper[28120]: I0220 15:03:49.943122 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 20 15:03:50.013593 master-0 kubenswrapper[28120]: I0220 15:03:50.013478 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 20 15:03:50.088438 master-0 kubenswrapper[28120]: I0220 15:03:50.088362 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 20 15:03:50.152767 master-0 kubenswrapper[28120]: I0220 15:03:50.152683 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 20 15:03:50.182637 master-0 kubenswrapper[28120]: I0220 15:03:50.182548 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 20 15:03:50.224304 master-0 kubenswrapper[28120]: I0220 15:03:50.224238 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 20 15:03:50.327473 master-0 kubenswrapper[28120]: I0220 
15:03:50.327307 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-mnmfc" Feb 20 15:03:50.432885 master-0 kubenswrapper[28120]: I0220 15:03:50.432848 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 20 15:03:50.489403 master-0 kubenswrapper[28120]: I0220 15:03:50.489344 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 20 15:03:50.501391 master-0 kubenswrapper[28120]: I0220 15:03:50.501321 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-s2d9t" Feb 20 15:03:50.507754 master-0 kubenswrapper[28120]: I0220 15:03:50.507700 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 20 15:03:50.600965 master-0 kubenswrapper[28120]: I0220 15:03:50.600829 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 20 15:03:50.601428 master-0 kubenswrapper[28120]: I0220 15:03:50.601359 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 20 15:03:50.654061 master-0 kubenswrapper[28120]: I0220 15:03:50.653996 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 20 15:03:50.716107 master-0 kubenswrapper[28120]: I0220 15:03:50.715797 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 20 15:03:50.719197 master-0 kubenswrapper[28120]: I0220 15:03:50.719161 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 20 
15:03:50.829448 master-0 kubenswrapper[28120]: I0220 15:03:50.829407 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 20 15:03:50.838195 master-0 kubenswrapper[28120]: I0220 15:03:50.838177 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 20 15:03:50.881407 master-0 kubenswrapper[28120]: I0220 15:03:50.881303 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 20 15:03:50.881800 master-0 kubenswrapper[28120]: I0220 15:03:50.881398 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 20 15:03:50.934286 master-0 kubenswrapper[28120]: I0220 15:03:50.934232 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 20 15:03:50.967429 master-0 kubenswrapper[28120]: I0220 15:03:50.967382 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 20 15:03:50.969570 master-0 kubenswrapper[28120]: I0220 15:03:50.969534 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 20 15:03:50.976034 master-0 kubenswrapper[28120]: I0220 15:03:50.976007 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 20 15:03:51.030964 master-0 kubenswrapper[28120]: I0220 15:03:51.030891 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 20 15:03:51.169612 master-0 kubenswrapper[28120]: I0220 15:03:51.169485 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 20 15:03:51.350849 master-0 kubenswrapper[28120]: I0220 
15:03:51.350784 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 20 15:03:51.380548 master-0 kubenswrapper[28120]: I0220 15:03:51.380487 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 20 15:03:51.408131 master-0 kubenswrapper[28120]: I0220 15:03:51.408086 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-jtt44" Feb 20 15:03:51.554554 master-0 kubenswrapper[28120]: I0220 15:03:51.554489 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 20 15:03:51.610624 master-0 kubenswrapper[28120]: I0220 15:03:51.610584 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-4xtlh" Feb 20 15:03:51.799246 master-0 kubenswrapper[28120]: I0220 15:03:51.799174 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 20 15:03:51.839151 master-0 kubenswrapper[28120]: I0220 15:03:51.838953 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 20 15:03:51.860106 master-0 kubenswrapper[28120]: I0220 15:03:51.860048 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-btmxs" Feb 20 15:03:51.965801 master-0 kubenswrapper[28120]: I0220 15:03:51.965766 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 20 15:03:51.975074 master-0 kubenswrapper[28120]: I0220 15:03:51.975060 28120 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Feb 20 15:03:52.063120 master-0 kubenswrapper[28120]: I0220 15:03:52.063065 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 20 15:03:52.066582 master-0 kubenswrapper[28120]: I0220 15:03:52.066558 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 20 15:03:52.093574 master-0 kubenswrapper[28120]: I0220 15:03:52.093459 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 20 15:03:52.162765 master-0 kubenswrapper[28120]: I0220 15:03:52.162734 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 20 15:03:52.273453 master-0 kubenswrapper[28120]: I0220 15:03:52.273419 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 20 15:03:52.312878 master-0 kubenswrapper[28120]: I0220 15:03:52.312823 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-5mrbx" Feb 20 15:03:52.341918 master-0 kubenswrapper[28120]: I0220 15:03:52.341755 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 20 15:03:52.443363 master-0 kubenswrapper[28120]: I0220 15:03:52.443209 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-dm2ds" Feb 20 15:03:52.448430 master-0 kubenswrapper[28120]: I0220 15:03:52.448397 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 20 15:03:52.487736 master-0 kubenswrapper[28120]: I0220 15:03:52.487674 28120 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 20 15:03:52.547636 master-0 kubenswrapper[28120]: I0220 15:03:52.547581 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 20 15:03:52.569500 master-0 kubenswrapper[28120]: I0220 15:03:52.569420 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 20 15:03:52.672598 master-0 kubenswrapper[28120]: I0220 15:03:52.672506 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 20 15:03:52.711964 master-0 kubenswrapper[28120]: I0220 15:03:52.711816 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 20 15:03:52.841882 master-0 kubenswrapper[28120]: I0220 15:03:52.841802 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 20 15:03:52.846455 master-0 kubenswrapper[28120]: I0220 15:03:52.846382 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-d7z2t" Feb 20 15:03:52.849099 master-0 kubenswrapper[28120]: I0220 15:03:52.849064 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 20 15:03:52.862001 master-0 kubenswrapper[28120]: I0220 15:03:52.861950 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 20 15:03:52.914692 master-0 kubenswrapper[28120]: I0220 15:03:52.912360 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 20 15:03:52.926323 master-0 kubenswrapper[28120]: I0220 15:03:52.926263 28120 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"kube-root-ca.crt" Feb 20 15:03:52.947154 master-0 kubenswrapper[28120]: I0220 15:03:52.947074 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 20 15:03:52.983490 master-0 kubenswrapper[28120]: I0220 15:03:52.983341 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-ts5zc" Feb 20 15:03:53.050300 master-0 kubenswrapper[28120]: I0220 15:03:53.050195 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 20 15:03:53.079490 master-0 kubenswrapper[28120]: I0220 15:03:53.079400 28120 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 20 15:03:53.094602 master-0 kubenswrapper[28120]: I0220 15:03:53.094524 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 20 15:03:53.232561 master-0 kubenswrapper[28120]: I0220 15:03:53.232462 28120 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 20 15:03:53.267956 master-0 kubenswrapper[28120]: I0220 15:03:53.267861 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 20 15:03:53.271743 master-0 kubenswrapper[28120]: I0220 15:03:53.271692 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 20 15:03:53.293082 master-0 kubenswrapper[28120]: I0220 15:03:53.292968 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 20 15:03:53.300624 master-0 kubenswrapper[28120]: I0220 15:03:53.300563 28120 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 20 15:03:53.355246 master-0 kubenswrapper[28120]: I0220 15:03:53.355171 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 20 15:03:53.365678 master-0 kubenswrapper[28120]: I0220 15:03:53.365579 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-tkxrl" Feb 20 15:03:53.411472 master-0 kubenswrapper[28120]: I0220 15:03:53.411399 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 20 15:03:53.436622 master-0 kubenswrapper[28120]: I0220 15:03:53.436564 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 20 15:03:53.442676 master-0 kubenswrapper[28120]: I0220 15:03:53.442648 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 20 15:03:53.470470 master-0 kubenswrapper[28120]: I0220 15:03:53.470395 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7zgzx" Feb 20 15:03:53.470689 master-0 kubenswrapper[28120]: I0220 15:03:53.470626 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 20 15:03:53.472284 master-0 kubenswrapper[28120]: I0220 15:03:53.472243 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 20 15:03:53.571582 master-0 kubenswrapper[28120]: I0220 15:03:53.571465 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 20 15:03:53.614470 master-0 kubenswrapper[28120]: I0220 15:03:53.614402 28120 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 20 15:03:53.645642 master-0 kubenswrapper[28120]: I0220 15:03:53.645579 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 20 15:03:53.649360 master-0 kubenswrapper[28120]: I0220 15:03:53.649308 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 20 15:03:53.775096 master-0 kubenswrapper[28120]: I0220 15:03:53.775005 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 20 15:03:53.852621 master-0 kubenswrapper[28120]: I0220 15:03:53.852421 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 20 15:03:53.945732 master-0 kubenswrapper[28120]: I0220 15:03:53.945669 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 20 15:03:54.033068 master-0 kubenswrapper[28120]: I0220 15:03:54.033009 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 20 15:03:54.033347 master-0 kubenswrapper[28120]: I0220 15:03:54.033316 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-7vdpw" Feb 20 15:03:54.108047 master-0 kubenswrapper[28120]: I0220 15:03:54.106698 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 20 15:03:54.142481 master-0 kubenswrapper[28120]: I0220 15:03:54.142364 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 20 15:03:54.209535 master-0 kubenswrapper[28120]: I0220 
15:03:54.209461 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 20 15:03:54.284934 master-0 kubenswrapper[28120]: I0220 15:03:54.284855 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 20 15:03:54.364117 master-0 kubenswrapper[28120]: I0220 15:03:54.363987 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 20 15:03:54.558031 master-0 kubenswrapper[28120]: I0220 15:03:54.557906 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 20 15:03:54.620272 master-0 kubenswrapper[28120]: I0220 15:03:54.620077 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 20 15:03:54.637348 master-0 kubenswrapper[28120]: I0220 15:03:54.637289 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 20 15:03:54.708840 master-0 kubenswrapper[28120]: I0220 15:03:54.708737 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 20 15:03:54.774429 master-0 kubenswrapper[28120]: I0220 15:03:54.774361 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 20 15:03:54.800864 master-0 kubenswrapper[28120]: I0220 15:03:54.800783 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 20 15:03:54.852722 master-0 kubenswrapper[28120]: I0220 15:03:54.852627 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 20 15:03:54.868396 master-0 kubenswrapper[28120]: I0220 15:03:54.868318 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 20 15:03:54.877781 master-0 kubenswrapper[28120]: I0220 15:03:54.877602 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 20 15:03:54.936469 master-0 kubenswrapper[28120]: I0220 15:03:54.936372 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-n8qfb"
Feb 20 15:03:54.979474 master-0 kubenswrapper[28120]: I0220 15:03:54.979398 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 20 15:03:54.985086 master-0 kubenswrapper[28120]: I0220 15:03:54.985038 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 20 15:03:55.057591 master-0 kubenswrapper[28120]: I0220 15:03:55.057508 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 20 15:03:55.073389 master-0 kubenswrapper[28120]: I0220 15:03:55.073303 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 20 15:03:55.088577 master-0 kubenswrapper[28120]: I0220 15:03:55.088502 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 20 15:03:55.092993 master-0 kubenswrapper[28120]: I0220 15:03:55.092959 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 20 15:03:55.145105 master-0 kubenswrapper[28120]: I0220 15:03:55.144907 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 20 15:03:55.199151 master-0 kubenswrapper[28120]: I0220 15:03:55.199045 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 20 15:03:55.220535 master-0 kubenswrapper[28120]: I0220 15:03:55.220445 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 20 15:03:55.269673 master-0 kubenswrapper[28120]: I0220 15:03:55.269617 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 20 15:03:55.361406 master-0 kubenswrapper[28120]: I0220 15:03:55.361306 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 20 15:03:55.390250 master-0 kubenswrapper[28120]: I0220 15:03:55.390154 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-mxnq7"
Feb 20 15:03:55.442757 master-0 kubenswrapper[28120]: I0220 15:03:55.442614 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 20 15:03:55.464898 master-0 kubenswrapper[28120]: I0220 15:03:55.464797 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 20 15:03:55.597147 master-0 kubenswrapper[28120]: I0220 15:03:55.597034 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 20 15:03:55.638205 master-0 kubenswrapper[28120]: I0220 15:03:55.638109 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 20 15:03:55.681870 master-0 kubenswrapper[28120]: I0220 15:03:55.681799 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 20 15:03:55.770111 master-0 kubenswrapper[28120]: I0220 15:03:55.770019 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 20 15:03:55.797535 master-0 kubenswrapper[28120]: I0220 15:03:55.797430 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 15:03:55.903275 master-0 kubenswrapper[28120]: I0220 15:03:55.903187 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 20 15:03:55.994258 master-0 kubenswrapper[28120]: I0220 15:03:55.994179 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 20 15:03:56.046782 master-0 kubenswrapper[28120]: I0220 15:03:56.046646 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 20 15:03:56.086600 master-0 kubenswrapper[28120]: I0220 15:03:56.086536 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 20 15:03:56.135259 master-0 kubenswrapper[28120]: I0220 15:03:56.135190 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 20 15:03:56.207991 master-0 kubenswrapper[28120]: I0220 15:03:56.207816 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 20 15:03:56.208754 master-0 kubenswrapper[28120]: I0220 15:03:56.208706 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 20 15:03:56.217680 master-0 kubenswrapper[28120]: I0220 15:03:56.217576 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 20 15:03:56.405919 master-0 kubenswrapper[28120]: I0220 15:03:56.405763 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 20 15:03:56.424579 master-0 kubenswrapper[28120]: I0220 15:03:56.424490 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 20 15:03:56.466138 master-0 kubenswrapper[28120]: I0220 15:03:56.466056 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-77fdh"
Feb 20 15:03:56.509276 master-0 kubenswrapper[28120]: I0220 15:03:56.509206 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7pkl9jqft06ca"
Feb 20 15:03:56.587851 master-0 kubenswrapper[28120]: I0220 15:03:56.587771 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 20 15:03:56.740462 master-0 kubenswrapper[28120]: I0220 15:03:56.740405 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 20 15:03:56.815709 master-0 kubenswrapper[28120]: I0220 15:03:56.815647 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-c2dd6"
Feb 20 15:03:56.858917 master-0 kubenswrapper[28120]: I0220 15:03:56.858859 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-qb2q7"
Feb 20 15:03:56.889174 master-0 kubenswrapper[28120]: I0220 15:03:56.889108 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 20 15:03:56.890699 master-0 kubenswrapper[28120]: I0220 15:03:56.890660 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 20 15:03:56.937615 master-0 kubenswrapper[28120]: I0220 15:03:56.937539 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 20 15:03:57.024256 master-0 kubenswrapper[28120]: I0220 15:03:57.024089 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 20 15:03:57.066538 master-0 kubenswrapper[28120]: I0220 15:03:57.066452 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 20 15:03:57.076514 master-0 kubenswrapper[28120]: I0220 15:03:57.076429 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 20 15:03:57.132459 master-0 kubenswrapper[28120]: I0220 15:03:57.132382 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 20 15:03:57.153483 master-0 kubenswrapper[28120]: I0220 15:03:57.153420 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-8m9cn"
Feb 20 15:03:57.191603 master-0 kubenswrapper[28120]: I0220 15:03:57.191531 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 20 15:03:57.271530 master-0 kubenswrapper[28120]: I0220 15:03:57.271455 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 20 15:03:57.278835 master-0 kubenswrapper[28120]: I0220 15:03:57.278709 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-gfr9m"
Feb 20 15:03:57.378717 master-0 kubenswrapper[28120]: I0220 15:03:57.378579 28120 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 20 15:03:57.405080 master-0 kubenswrapper[28120]: I0220 15:03:57.404954 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-jscmz"
Feb 20 15:03:57.448074 master-0 kubenswrapper[28120]: I0220 15:03:57.447994 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-rnbdm"
Feb 20 15:03:57.509280 master-0 kubenswrapper[28120]: I0220 15:03:57.509191 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mvrxq"
Feb 20 15:03:57.598167 master-0 kubenswrapper[28120]: I0220 15:03:57.597029 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 20 15:03:57.631201 master-0 kubenswrapper[28120]: I0220 15:03:57.631126 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 20 15:03:57.667711 master-0 kubenswrapper[28120]: I0220 15:03:57.667670 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 20 15:03:57.678870 master-0 kubenswrapper[28120]: I0220 15:03:57.678751 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 20 15:03:57.838468 master-0 kubenswrapper[28120]: I0220 15:03:57.838373 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 20 15:03:57.868472 master-0 kubenswrapper[28120]: I0220 15:03:57.868341 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 20 15:03:57.874251 master-0 kubenswrapper[28120]: I0220 15:03:57.874213 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 20 15:03:57.883981 master-0 kubenswrapper[28120]: I0220 15:03:57.883948 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 20 15:03:57.890895 master-0 kubenswrapper[28120]: I0220 15:03:57.890861 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 20 15:03:58.059790 master-0 kubenswrapper[28120]: I0220 15:03:58.059728 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 20 15:03:58.129802 master-0 kubenswrapper[28120]: I0220 15:03:58.129619 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 20 15:03:58.157418 master-0 kubenswrapper[28120]: I0220 15:03:58.157354 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 20 15:03:58.160967 master-0 kubenswrapper[28120]: I0220 15:03:58.160893 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 20 15:03:58.219108 master-0 kubenswrapper[28120]: I0220 15:03:58.219038 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 20 15:03:58.345896 master-0 kubenswrapper[28120]: I0220 15:03:58.345784 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-sn97p"
Feb 20 15:03:58.380614 master-0 kubenswrapper[28120]: I0220 15:03:58.380502 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 20 15:03:58.455026 master-0 kubenswrapper[28120]: I0220 15:03:58.454892 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 20 15:03:58.504967 master-0 kubenswrapper[28120]: I0220 15:03:58.504580 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 20 15:03:58.578070 master-0 kubenswrapper[28120]: I0220 15:03:58.577969 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 20 15:03:58.619839 master-0 kubenswrapper[28120]: I0220 15:03:58.619756 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 20 15:03:58.651599 master-0 kubenswrapper[28120]: I0220 15:03:58.651456 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 20 15:03:58.784402 master-0 kubenswrapper[28120]: I0220 15:03:58.784335 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 20 15:03:58.801051 master-0 kubenswrapper[28120]: I0220 15:03:58.800999 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 20 15:03:58.831432 master-0 kubenswrapper[28120]: I0220 15:03:58.831356 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 20 15:03:58.908805 master-0 kubenswrapper[28120]: I0220 15:03:58.908676 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 20 15:03:58.929974 master-0 kubenswrapper[28120]: I0220 15:03:58.929906 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 20 15:03:58.984678 master-0 kubenswrapper[28120]: I0220 15:03:58.984606 28120 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 20 15:03:59.005406 master-0 kubenswrapper[28120]: I0220 15:03:59.005332 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 20 15:03:59.055217 master-0 kubenswrapper[28120]: I0220 15:03:59.055141 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 20 15:03:59.444297 master-0 kubenswrapper[28120]: I0220 15:03:59.444252 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 20 15:03:59.472472 master-0 kubenswrapper[28120]: I0220 15:03:59.472422 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 15:03:59.523786 master-0 kubenswrapper[28120]: I0220 15:03:59.523670 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 20 15:03:59.830249 master-0 kubenswrapper[28120]: I0220 15:03:59.830162 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 20 15:03:59.880756 master-0 kubenswrapper[28120]: I0220 15:03:59.880632 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 20 15:03:59.889150 master-0 kubenswrapper[28120]: I0220 15:03:59.888874 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 20 15:03:59.992712 master-0 kubenswrapper[28120]: I0220 15:03:59.992619 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-9g7zv"
Feb 20 15:04:01.042411 master-0 kubenswrapper[28120]: I0220 15:04:01.042313 28120 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 20 15:04:01.052725 master-0 kubenswrapper[28120]: I0220 15:04:01.052648 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 20 15:04:01.052870 master-0 kubenswrapper[28120]: I0220 15:04:01.052738 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 20 15:04:01.060550 master-0 kubenswrapper[28120]: I0220 15:04:01.060464 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:04:01.095857 master-0 kubenswrapper[28120]: I0220 15:04:01.095742 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=28.095714147 podStartE2EDuration="28.095714147s" podCreationTimestamp="2026-02-20 15:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:04:01.088159048 +0000 UTC m=+179.348952691" watchObservedRunningTime="2026-02-20 15:04:01.095714147 +0000 UTC m=+179.356507740"
Feb 20 15:04:02.769762 master-0 kubenswrapper[28120]: I0220 15:04:02.769678 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 20 15:04:08.008766 master-0 kubenswrapper[28120]: I0220 15:04:08.008649 28120 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 20 15:04:08.009781 master-0 kubenswrapper[28120]: I0220 15:04:08.009103 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor" containerID="cri-o://959cece566de771c554ebb90f0e2c429caada188ff7e197cc96dd521087c7e3b" gracePeriod=5
Feb 20 15:04:13.300623 master-0 kubenswrapper[28120]: I0220 15:04:13.300564 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_2146f0e3671998cad8bbc2464b009ab7/startup-monitor/0.log"
Feb 20 15:04:13.302102 master-0 kubenswrapper[28120]: I0220 15:04:13.300632 28120 generic.go:334] "Generic (PLEG): container finished" podID="2146f0e3671998cad8bbc2464b009ab7" containerID="959cece566de771c554ebb90f0e2c429caada188ff7e197cc96dd521087c7e3b" exitCode=137
Feb 20 15:04:13.305451 master-0 kubenswrapper[28120]: I0220 15:04:13.305415 28120 generic.go:334] "Generic (PLEG): container finished" podID="c0a3548f-299c-4234-9bf1-c93efcb9740b" containerID="8955afec05ac17b6d5bd5b27623b6f73413fa01ace341f3ccb7e06f06406e93d" exitCode=0
Feb 20 15:04:13.305640 master-0 kubenswrapper[28120]: I0220 15:04:13.305454 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" event={"ID":"c0a3548f-299c-4234-9bf1-c93efcb9740b","Type":"ContainerDied","Data":"8955afec05ac17b6d5bd5b27623b6f73413fa01ace341f3ccb7e06f06406e93d"}
Feb 20 15:04:13.305640 master-0 kubenswrapper[28120]: I0220 15:04:13.305490 28120 scope.go:117] "RemoveContainer" containerID="52bf43d0e30c121fdb642cca3e4e8c737348e2c0806817b6c660ae4bd355d192"
Feb 20 15:04:13.306631 master-0 kubenswrapper[28120]: I0220 15:04:13.306145 28120 scope.go:117] "RemoveContainer" containerID="8955afec05ac17b6d5bd5b27623b6f73413fa01ace341f3ccb7e06f06406e93d"
Feb 20 15:04:13.712155 master-0 kubenswrapper[28120]: I0220 15:04:13.710230 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_2146f0e3671998cad8bbc2464b009ab7/startup-monitor/0.log"
Feb 20 15:04:13.712155 master-0 kubenswrapper[28120]: I0220 15:04:13.710351 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:04:13.903832 master-0 kubenswrapper[28120]: I0220 15:04:13.903635 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") "
Feb 20 15:04:13.904198 master-0 kubenswrapper[28120]: I0220 15:04:13.903833 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests" (OuterVolumeSpecName: "manifests") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:04:13.904198 master-0 kubenswrapper[28120]: I0220 15:04:13.903894 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") "
Feb 20 15:04:13.904198 master-0 kubenswrapper[28120]: I0220 15:04:13.904014 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") "
Feb 20 15:04:13.904198 master-0 kubenswrapper[28120]: I0220 15:04:13.904099 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") "
Feb 20 15:04:13.904198 master-0 kubenswrapper[28120]: I0220 15:04:13.904073 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:04:13.904414 master-0 kubenswrapper[28120]: I0220 15:04:13.904169 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") "
Feb 20 15:04:13.904414 master-0 kubenswrapper[28120]: I0220 15:04:13.904268 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log" (OuterVolumeSpecName: "var-log") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:04:13.904494 master-0 kubenswrapper[28120]: I0220 15:04:13.904323 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock" (OuterVolumeSpecName: "var-lock") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:04:13.905122 master-0 kubenswrapper[28120]: I0220 15:04:13.905088 28120 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") on node \"master-0\" DevicePath \"\""
Feb 20 15:04:13.905191 master-0 kubenswrapper[28120]: I0220 15:04:13.905118 28120 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:04:13.905191 master-0 kubenswrapper[28120]: I0220 15:04:13.905139 28120 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 15:04:13.905191 master-0 kubenswrapper[28120]: I0220 15:04:13.905152 28120 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") on node \"master-0\" DevicePath \"\""
Feb 20 15:04:13.912463 master-0 kubenswrapper[28120]: I0220 15:04:13.912399 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:04:14.007303 master-0 kubenswrapper[28120]: I0220 15:04:14.007217 28120 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:04:14.070916 master-0 kubenswrapper[28120]: I0220 15:04:14.070822 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2146f0e3671998cad8bbc2464b009ab7" path="/var/lib/kubelet/pods/2146f0e3671998cad8bbc2464b009ab7/volumes"
Feb 20 15:04:14.316781 master-0 kubenswrapper[28120]: I0220 15:04:14.316667 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_2146f0e3671998cad8bbc2464b009ab7/startup-monitor/0.log"
Feb 20 15:04:14.317747 master-0 kubenswrapper[28120]: I0220 15:04:14.316882 28120 scope.go:117] "RemoveContainer" containerID="959cece566de771c554ebb90f0e2c429caada188ff7e197cc96dd521087c7e3b"
Feb 20 15:04:14.317747 master-0 kubenswrapper[28120]: I0220 15:04:14.316953 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:04:14.322247 master-0 kubenswrapper[28120]: I0220 15:04:14.322172 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r" event={"ID":"c0a3548f-299c-4234-9bf1-c93efcb9740b","Type":"ContainerStarted","Data":"0690b5cde1de0a7e0ee02ec27511e6c0338973097769b56b60384aee0a9acb07"}
Feb 20 15:04:14.322642 master-0 kubenswrapper[28120]: I0220 15:04:14.322590 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 15:04:14.325956 master-0 kubenswrapper[28120]: I0220 15:04:14.325862 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-97m7r"
Feb 20 15:05:27.035673 master-0 kubenswrapper[28120]: I0220 15:05:27.035581 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-7-master-0"]
Feb 20 15:05:27.037062 master-0 kubenswrapper[28120]: E0220 15:05:27.035986 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor"
Feb 20 15:05:27.037062 master-0 kubenswrapper[28120]: I0220 15:05:27.036007 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor"
Feb 20 15:05:27.037062 master-0 kubenswrapper[28120]: E0220 15:05:27.036033 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" containerName="installer"
Feb 20 15:05:27.037062 master-0 kubenswrapper[28120]: I0220 15:05:27.036046 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" containerName="installer"
Feb 20 15:05:27.037062 master-0 kubenswrapper[28120]: I0220 15:05:27.036266 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e038c9-a412-4741-9c07-8507d21d0a80" containerName="installer"
Feb 20 15:05:27.037062 master-0 kubenswrapper[28120]: I0220 15:05:27.036294 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor"
Feb 20 15:05:27.039308 master-0 kubenswrapper[28120]: I0220 15:05:27.037250 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-7-master-0"
Feb 20 15:05:27.042348 master-0 kubenswrapper[28120]: I0220 15:05:27.042290 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 20 15:05:27.043026 master-0 kubenswrapper[28120]: I0220 15:05:27.042977 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-w4mx6"
Feb 20 15:05:27.050074 master-0 kubenswrapper[28120]: I0220 15:05:27.049880 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-7-master-0"]
Feb 20 15:05:27.235119 master-0 kubenswrapper[28120]: I0220 15:05:27.235058 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff9eff2a-6fb2-4972-b393-faa883504995-var-lock\") pod \"installer-7-master-0\" (UID: \"ff9eff2a-6fb2-4972-b393-faa883504995\") " pod="openshift-kube-apiserver/installer-7-master-0"
Feb 20 15:05:27.235438 master-0 kubenswrapper[28120]: I0220 15:05:27.235171 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff9eff2a-6fb2-4972-b393-faa883504995-kube-api-access\") pod \"installer-7-master-0\" (UID: \"ff9eff2a-6fb2-4972-b393-faa883504995\") " pod="openshift-kube-apiserver/installer-7-master-0"
Feb 20 15:05:27.235438 master-0 kubenswrapper[28120]: I0220 15:05:27.235279 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff9eff2a-6fb2-4972-b393-faa883504995-kubelet-dir\") pod \"installer-7-master-0\" (UID: \"ff9eff2a-6fb2-4972-b393-faa883504995\") " pod="openshift-kube-apiserver/installer-7-master-0"
Feb 20 15:05:27.336527 master-0 kubenswrapper[28120]: I0220 15:05:27.336385 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff9eff2a-6fb2-4972-b393-faa883504995-kubelet-dir\") pod \"installer-7-master-0\" (UID: \"ff9eff2a-6fb2-4972-b393-faa883504995\") " pod="openshift-kube-apiserver/installer-7-master-0"
Feb 20 15:05:27.336527 master-0 kubenswrapper[28120]: I0220 15:05:27.336487 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff9eff2a-6fb2-4972-b393-faa883504995-var-lock\") pod \"installer-7-master-0\" (UID: \"ff9eff2a-6fb2-4972-b393-faa883504995\") " pod="openshift-kube-apiserver/installer-7-master-0"
Feb 20 15:05:27.336771 master-0 kubenswrapper[28120]: I0220 15:05:27.336564 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff9eff2a-6fb2-4972-b393-faa883504995-kube-api-access\") pod \"installer-7-master-0\" (UID: \"ff9eff2a-6fb2-4972-b393-faa883504995\") " pod="openshift-kube-apiserver/installer-7-master-0"
Feb 20 15:05:27.336771 master-0 kubenswrapper[28120]: I0220 15:05:27.336631 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff9eff2a-6fb2-4972-b393-faa883504995-var-lock\") pod \"installer-7-master-0\" (UID: \"ff9eff2a-6fb2-4972-b393-faa883504995\") " pod="openshift-kube-apiserver/installer-7-master-0"
Feb 20 15:05:27.336771 master-0 kubenswrapper[28120]: I0220 15:05:27.336615 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff9eff2a-6fb2-4972-b393-faa883504995-kubelet-dir\") pod \"installer-7-master-0\" (UID: \"ff9eff2a-6fb2-4972-b393-faa883504995\") " pod="openshift-kube-apiserver/installer-7-master-0"
Feb 20 15:05:27.367966 master-0 kubenswrapper[28120]: I0220 15:05:27.365741 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff9eff2a-6fb2-4972-b393-faa883504995-kube-api-access\") pod \"installer-7-master-0\" (UID: \"ff9eff2a-6fb2-4972-b393-faa883504995\") " pod="openshift-kube-apiserver/installer-7-master-0"
Feb 20 15:05:27.378990 master-0 kubenswrapper[28120]: I0220 15:05:27.374918 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-7-master-0"
Feb 20 15:05:27.898449 master-0 kubenswrapper[28120]: I0220 15:05:27.896123 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-7-master-0"]
Feb 20 15:05:27.946981 master-0 kubenswrapper[28120]: I0220 15:05:27.946872 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-7-master-0" event={"ID":"ff9eff2a-6fb2-4972-b393-faa883504995","Type":"ContainerStarted","Data":"cbd93db5ce6e8e8ebf11a51c7296d7478ce3b60cf0071c5850483baf691b991b"}
Feb 20 15:05:28.960680 master-0 kubenswrapper[28120]: I0220 15:05:28.960567 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-7-master-0" event={"ID":"ff9eff2a-6fb2-4972-b393-faa883504995","Type":"ContainerStarted","Data":"199b626d3af169dd1863bd86a256a14084f72991c72c28ec4c1c5d1db9f5435f"}
Feb 20 15:05:39.232580 master-0 kubenswrapper[28120]: I0220 15:05:39.232455 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-7-master-0" podStartSLOduration=12.232429123 podStartE2EDuration="12.232429123s" podCreationTimestamp="2026-02-20 15:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:05:28.985621968 +0000 UTC m=+267.246415601" watchObservedRunningTime="2026-02-20 15:05:39.232429123 +0000 UTC m=+277.493222716"
Feb 20 15:05:39.233718 master-0 kubenswrapper[28120]: I0220 15:05:39.233515 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-7-master-0"]
Feb 20 15:05:39.233830 master-0 kubenswrapper[28120]: I0220 15:05:39.233767 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-7-master-0" podUID="ff9eff2a-6fb2-4972-b393-faa883504995" containerName="installer" containerID="cri-o://199b626d3af169dd1863bd86a256a14084f72991c72c28ec4c1c5d1db9f5435f" gracePeriod=30
Feb 20 15:05:41.097128 master-0 kubenswrapper[28120]: I0220 15:05:41.096989 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"]
Feb 20 15:05:41.098322 master-0 kubenswrapper[28120]: I0220 15:05:41.098258 28120 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" Feb 20 15:05:41.102425 master-0 kubenswrapper[28120]: I0220 15:05:41.102344 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 20 15:05:41.102914 master-0 kubenswrapper[28120]: I0220 15:05:41.102853 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-9zvh6" Feb 20 15:05:41.103268 master-0 kubenswrapper[28120]: I0220 15:05:41.103207 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 20 15:05:41.103885 master-0 kubenswrapper[28120]: I0220 15:05:41.103823 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 20 15:05:41.104321 master-0 kubenswrapper[28120]: I0220 15:05:41.104252 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 20 15:05:41.104502 master-0 kubenswrapper[28120]: I0220 15:05:41.104458 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 20 15:05:41.108631 master-0 kubenswrapper[28120]: I0220 15:05:41.108552 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-42dr4"] Feb 20 15:05:41.112071 master-0 kubenswrapper[28120]: I0220 15:05:41.110314 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-42dr4" Feb 20 15:05:41.114333 master-0 kubenswrapper[28120]: I0220 15:05:41.114110 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 20 15:05:41.114642 master-0 kubenswrapper[28120]: I0220 15:05:41.114577 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 20 15:05:41.115987 master-0 kubenswrapper[28120]: I0220 15:05:41.115097 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 20 15:05:41.124023 master-0 kubenswrapper[28120]: I0220 15:05:41.123244 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72"] Feb 20 15:05:41.124760 master-0 kubenswrapper[28120]: I0220 15:05:41.124613 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72" Feb 20 15:05:41.126965 master-0 kubenswrapper[28120]: I0220 15:05:41.126850 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 20 15:05:41.127574 master-0 kubenswrapper[28120]: I0220 15:05:41.127422 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-glnt8" Feb 20 15:05:41.137373 master-0 kubenswrapper[28120]: I0220 15:05:41.137295 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"] Feb 20 15:05:41.138969 master-0 kubenswrapper[28120]: I0220 15:05:41.138871 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" Feb 20 15:05:41.142614 master-0 kubenswrapper[28120]: I0220 15:05:41.142568 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-2w8rc" Feb 20 15:05:41.143332 master-0 kubenswrapper[28120]: I0220 15:05:41.143009 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 20 15:05:41.143332 master-0 kubenswrapper[28120]: I0220 15:05:41.143255 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 20 15:05:41.143525 master-0 kubenswrapper[28120]: I0220 15:05:41.143398 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 20 15:05:41.144766 master-0 kubenswrapper[28120]: I0220 15:05:41.143706 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 20 15:05:41.144766 master-0 kubenswrapper[28120]: I0220 15:05:41.144504 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 20 15:05:41.149896 master-0 kubenswrapper[28120]: I0220 15:05:41.149797 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-f2p4m"] Feb 20 15:05:41.151840 master-0 kubenswrapper[28120]: I0220 15:05:41.151626 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m" Feb 20 15:05:41.155097 master-0 kubenswrapper[28120]: I0220 15:05:41.154947 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4x7c8" Feb 20 15:05:41.155455 master-0 kubenswrapper[28120]: I0220 15:05:41.155236 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 20 15:05:41.155682 master-0 kubenswrapper[28120]: I0220 15:05:41.155623 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-s6kvh"] Feb 20 15:05:41.156237 master-0 kubenswrapper[28120]: I0220 15:05:41.156033 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 20 15:05:41.156349 master-0 kubenswrapper[28120]: I0220 15:05:41.156207 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 20 15:05:41.156981 master-0 kubenswrapper[28120]: I0220 15:05:41.156661 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 20 15:05:41.158735 master-0 kubenswrapper[28120]: I0220 15:05:41.158602 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-s6kvh" Feb 20 15:05:41.159692 master-0 kubenswrapper[28120]: I0220 15:05:41.159527 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"] Feb 20 15:05:41.163878 master-0 kubenswrapper[28120]: I0220 15:05:41.160850 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" Feb 20 15:05:41.165139 master-0 kubenswrapper[28120]: I0220 15:05:41.165039 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k4cl\" (UniqueName: \"kubernetes.io/projected/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-kube-api-access-6k4cl\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165156 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmg86\" (UniqueName: \"kubernetes.io/projected/4e11583a-e3a5-4634-89bc-6edb03f6ba02-kube-api-access-kmg86\") pod \"route-controller-manager-6fcf8cbd8f-6gz7r\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165237 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-client-ca\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165290 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd597b37-557c-473c-9191-91510e92fdd8-serving-cert\") pod \"console-operator-5df5ffc47c-f2p4m\" (UID: \"fd597b37-557c-473c-9191-91510e92fdd8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 
15:05:41.165342 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-config\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165390 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea38f6cf-c959-49b0-85e5-868c3b3187cf-secret-volume\") pod \"collect-profiles-29526660-2pk72\" (UID: \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165472 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/638be8af-0cb4-4cbc-8bb9-72c996ee87b9-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-42dr4\" (UID: \"638be8af-0cb4-4cbc-8bb9-72c996ee87b9\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-42dr4" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165544 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-proxy-ca-bundles\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165596 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxtjm\" (UniqueName: 
\"kubernetes.io/projected/fd597b37-557c-473c-9191-91510e92fdd8-kube-api-access-zxtjm\") pod \"console-operator-5df5ffc47c-f2p4m\" (UID: \"fd597b37-557c-473c-9191-91510e92fdd8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165651 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea38f6cf-c959-49b0-85e5-868c3b3187cf-config-volume\") pod \"collect-profiles-29526660-2pk72\" (UID: \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165709 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e11583a-e3a5-4634-89bc-6edb03f6ba02-config\") pod \"route-controller-manager-6fcf8cbd8f-6gz7r\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165765 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd597b37-557c-473c-9191-91510e92fdd8-config\") pod \"console-operator-5df5ffc47c-f2p4m\" (UID: \"fd597b37-557c-473c-9191-91510e92fdd8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165821 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e11583a-e3a5-4634-89bc-6edb03f6ba02-client-ca\") pod \"route-controller-manager-6fcf8cbd8f-6gz7r\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " 
pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165890 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e11583a-e3a5-4634-89bc-6edb03f6ba02-serving-cert\") pod \"route-controller-manager-6fcf8cbd8f-6gz7r\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.165994 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd597b37-557c-473c-9191-91510e92fdd8-trusted-ca\") pod \"console-operator-5df5ffc47c-f2p4m\" (UID: \"fd597b37-557c-473c-9191-91510e92fdd8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.166052 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/638be8af-0cb4-4cbc-8bb9-72c996ee87b9-nginx-conf\") pod \"networking-console-plugin-79f587d78f-42dr4\" (UID: \"638be8af-0cb4-4cbc-8bb9-72c996ee87b9\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-42dr4" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.166115 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zht4w\" (UniqueName: \"kubernetes.io/projected/ea38f6cf-c959-49b0-85e5-868c3b3187cf-kube-api-access-zht4w\") pod \"collect-profiles-29526660-2pk72\" (UID: \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.166160 28120 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-serving-cert\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.167189 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 20 15:05:41.169360 master-0 kubenswrapper[28120]: I0220 15:05:41.167527 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-lh9jf"] Feb 20 15:05:41.172773 master-0 kubenswrapper[28120]: I0220 15:05:41.172647 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 20 15:05:41.173538 master-0 kubenswrapper[28120]: I0220 15:05:41.173485 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-k6xtb" Feb 20 15:05:41.173793 master-0 kubenswrapper[28120]: I0220 15:05:41.173757 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-2wv5x" Feb 20 15:05:41.174018 master-0 kubenswrapper[28120]: I0220 15:05:41.173983 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 20 15:05:41.175801 master-0 kubenswrapper[28120]: I0220 15:05:41.175757 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 20 15:05:41.188972 master-0 kubenswrapper[28120]: I0220 15:05:41.181325 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 20 
15:05:41.188972 master-0 kubenswrapper[28120]: I0220 15:05:41.182990 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 20 15:05:41.188972 master-0 kubenswrapper[28120]: I0220 15:05:41.184888 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 20 15:05:41.188972 master-0 kubenswrapper[28120]: I0220 15:05:41.186821 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 20 15:05:41.188972 master-0 kubenswrapper[28120]: I0220 15:05:41.187167 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 20 15:05:41.188972 master-0 kubenswrapper[28120]: I0220 15:05:41.187788 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 20 15:05:41.188972 master-0 kubenswrapper[28120]: I0220 15:05:41.187992 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf" Feb 20 15:05:41.188972 master-0 kubenswrapper[28120]: I0220 15:05:41.188492 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 20 15:05:41.192882 master-0 kubenswrapper[28120]: I0220 15:05:41.192757 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 20 15:05:41.195632 master-0 kubenswrapper[28120]: I0220 15:05:41.195080 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 20 15:05:41.195894 master-0 kubenswrapper[28120]: I0220 15:05:41.195844 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-9bd798c78-d5dft"] Feb 20 15:05:41.197254 master-0 kubenswrapper[28120]: I0220 15:05:41.197219 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-9bd798c78-d5dft" Feb 20 15:05:41.198878 master-0 kubenswrapper[28120]: I0220 15:05:41.198828 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 20 15:05:41.199007 master-0 kubenswrapper[28120]: I0220 15:05:41.198894 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-lr4lf" Feb 20 15:05:41.200385 master-0 kubenswrapper[28120]: I0220 15:05:41.200339 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-brpdw" Feb 20 15:05:41.201148 master-0 kubenswrapper[28120]: I0220 15:05:41.201071 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 20 15:05:41.210465 master-0 kubenswrapper[28120]: I0220 15:05:41.210420 28120 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 20 15:05:41.210618 master-0 kubenswrapper[28120]: I0220 15:05:41.210586 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72"] Feb 20 15:05:41.213380 master-0 kubenswrapper[28120]: I0220 15:05:41.213339 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 20 15:05:41.221441 master-0 kubenswrapper[28120]: I0220 15:05:41.221390 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-9bd798c78-d5dft"] Feb 20 15:05:41.229342 master-0 kubenswrapper[28120]: I0220 15:05:41.228950 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"] Feb 20 15:05:41.241399 master-0 kubenswrapper[28120]: I0220 15:05:41.241336 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"] Feb 20 15:05:41.254110 master-0 kubenswrapper[28120]: I0220 15:05:41.254032 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-42dr4"] Feb 20 15:05:41.260251 master-0 kubenswrapper[28120]: I0220 15:05:41.260193 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"] Feb 20 15:05:41.269433 master-0 kubenswrapper[28120]: I0220 15:05:41.269371 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea38f6cf-c959-49b0-85e5-868c3b3187cf-secret-volume\") pod \"collect-profiles-29526660-2pk72\" (UID: \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72" Feb 20 15:05:41.269542 master-0 kubenswrapper[28120]: 
I0220 15:05:41.269445 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-audit-policies\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" Feb 20 15:05:41.269542 master-0 kubenswrapper[28120]: I0220 15:05:41.269485 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/638be8af-0cb4-4cbc-8bb9-72c996ee87b9-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-42dr4\" (UID: \"638be8af-0cb4-4cbc-8bb9-72c996ee87b9\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-42dr4" Feb 20 15:05:41.269542 master-0 kubenswrapper[28120]: I0220 15:05:41.269516 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" Feb 20 15:05:41.269542 master-0 kubenswrapper[28120]: I0220 15:05:41.269544 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" Feb 20 15:05:41.269723 master-0 kubenswrapper[28120]: I0220 15:05:41.269576 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-proxy-ca-bundles\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" Feb 20 15:05:41.269723 master-0 kubenswrapper[28120]: I0220 15:05:41.269602 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxtjm\" (UniqueName: \"kubernetes.io/projected/fd597b37-557c-473c-9191-91510e92fdd8-kube-api-access-zxtjm\") pod \"console-operator-5df5ffc47c-f2p4m\" (UID: \"fd597b37-557c-473c-9191-91510e92fdd8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m" Feb 20 15:05:41.269723 master-0 kubenswrapper[28120]: I0220 15:05:41.269629 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea38f6cf-c959-49b0-85e5-868c3b3187cf-config-volume\") pod \"collect-profiles-29526660-2pk72\" (UID: \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72" Feb 20 15:05:41.269723 master-0 kubenswrapper[28120]: I0220 15:05:41.269656 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e11583a-e3a5-4634-89bc-6edb03f6ba02-config\") pod \"route-controller-manager-6fcf8cbd8f-6gz7r\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" Feb 20 15:05:41.269723 master-0 kubenswrapper[28120]: I0220 15:05:41.269697 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd597b37-557c-473c-9191-91510e92fdd8-config\") pod \"console-operator-5df5ffc47c-f2p4m\" (UID: \"fd597b37-557c-473c-9191-91510e92fdd8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m" Feb 20 
15:05:41.269968 master-0 kubenswrapper[28120]: I0220 15:05:41.269733 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-error\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.269968 master-0 kubenswrapper[28120]: I0220 15:05:41.269763 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e11583a-e3a5-4634-89bc-6edb03f6ba02-client-ca\") pod \"route-controller-manager-6fcf8cbd8f-6gz7r\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"
Feb 20 15:05:41.269968 master-0 kubenswrapper[28120]: I0220 15:05:41.269795 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.269968 master-0 kubenswrapper[28120]: I0220 15:05:41.269831 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b95604a-6807-4654-a1b0-7dc20cc4418c-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-lh9jf\" (UID: \"0b95604a-6807-4654-a1b0-7dc20cc4418c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf"
Feb 20 15:05:41.269968 master-0 kubenswrapper[28120]: I0220 15:05:41.269859 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e11583a-e3a5-4634-89bc-6edb03f6ba02-serving-cert\") pod \"route-controller-manager-6fcf8cbd8f-6gz7r\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"
Feb 20 15:05:41.269968 master-0 kubenswrapper[28120]: I0220 15:05:41.269884 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc47b01c-8c45-4e60-9558-7d34770441d4-audit-dir\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.269968 master-0 kubenswrapper[28120]: I0220 15:05:41.269908 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-login\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.269968 master-0 kubenswrapper[28120]: I0220 15:05:41.269957 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b95604a-6807-4654-a1b0-7dc20cc4418c-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-lh9jf\" (UID: \"0b95604a-6807-4654-a1b0-7dc20cc4418c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf"
Feb 20 15:05:41.270300 master-0 kubenswrapper[28120]: I0220 15:05:41.269994 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd597b37-557c-473c-9191-91510e92fdd8-trusted-ca\") pod \"console-operator-5df5ffc47c-f2p4m\" (UID: \"fd597b37-557c-473c-9191-91510e92fdd8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m"
Feb 20 15:05:41.270300 master-0 kubenswrapper[28120]: I0220 15:05:41.270028 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/638be8af-0cb4-4cbc-8bb9-72c996ee87b9-nginx-conf\") pod \"networking-console-plugin-79f587d78f-42dr4\" (UID: \"638be8af-0cb4-4cbc-8bb9-72c996ee87b9\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-42dr4"
Feb 20 15:05:41.270300 master-0 kubenswrapper[28120]: I0220 15:05:41.270068 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.270300 master-0 kubenswrapper[28120]: I0220 15:05:41.270111 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.270300 master-0 kubenswrapper[28120]: I0220 15:05:41.270138 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-session\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.270300 master-0 kubenswrapper[28120]: I0220 15:05:41.270164 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.270300 master-0 kubenswrapper[28120]: I0220 15:05:41.270189 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zht4w\" (UniqueName: \"kubernetes.io/projected/ea38f6cf-c959-49b0-85e5-868c3b3187cf-kube-api-access-zht4w\") pod \"collect-profiles-29526660-2pk72\" (UID: \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72"
Feb 20 15:05:41.270300 master-0 kubenswrapper[28120]: I0220 15:05:41.270219 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-serving-cert\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"
Feb 20 15:05:41.270300 master-0 kubenswrapper[28120]: I0220 15:05:41.270248 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/563f832c-f7a7-405e-a4f3-6c84064b2942-host\") pod \"node-ca-s6kvh\" (UID: \"563f832c-f7a7-405e-a4f3-6c84064b2942\") " pod="openshift-image-registry/node-ca-s6kvh"
Feb 20 15:05:41.270300 master-0 kubenswrapper[28120]: I0220 15:05:41.270280 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.270300 master-0 kubenswrapper[28120]: I0220 15:05:41.270302 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scbvx\" (UniqueName: \"kubernetes.io/projected/563f832c-f7a7-405e-a4f3-6c84064b2942-kube-api-access-scbvx\") pod \"node-ca-s6kvh\" (UID: \"563f832c-f7a7-405e-a4f3-6c84064b2942\") " pod="openshift-image-registry/node-ca-s6kvh"
Feb 20 15:05:41.270746 master-0 kubenswrapper[28120]: I0220 15:05:41.270329 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k4cl\" (UniqueName: \"kubernetes.io/projected/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-kube-api-access-6k4cl\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"
Feb 20 15:05:41.270746 master-0 kubenswrapper[28120]: I0220 15:05:41.270357 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xllbv\" (UniqueName: \"kubernetes.io/projected/bc47b01c-8c45-4e60-9558-7d34770441d4-kube-api-access-xllbv\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.270746 master-0 kubenswrapper[28120]: I0220 15:05:41.270389 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmg86\" (UniqueName: \"kubernetes.io/projected/4e11583a-e3a5-4634-89bc-6edb03f6ba02-kube-api-access-kmg86\") pod \"route-controller-manager-6fcf8cbd8f-6gz7r\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"
Feb 20 15:05:41.270746 master-0 kubenswrapper[28120]: I0220 15:05:41.270418 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twblg\" (UniqueName: \"kubernetes.io/projected/0b95604a-6807-4654-a1b0-7dc20cc4418c-kube-api-access-twblg\") pod \"cni-sysctl-allowlist-ds-lh9jf\" (UID: \"0b95604a-6807-4654-a1b0-7dc20cc4418c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf"
Feb 20 15:05:41.270746 master-0 kubenswrapper[28120]: I0220 15:05:41.270450 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9f6549ad-a91a-4e45-ae85-f79ddf9aee86-monitoring-plugin-cert\") pod \"monitoring-plugin-9bd798c78-d5dft\" (UID: \"9f6549ad-a91a-4e45-ae85-f79ddf9aee86\") " pod="openshift-monitoring/monitoring-plugin-9bd798c78-d5dft"
Feb 20 15:05:41.270746 master-0 kubenswrapper[28120]: I0220 15:05:41.270474 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/563f832c-f7a7-405e-a4f3-6c84064b2942-serviceca\") pod \"node-ca-s6kvh\" (UID: \"563f832c-f7a7-405e-a4f3-6c84064b2942\") " pod="openshift-image-registry/node-ca-s6kvh"
Feb 20 15:05:41.270746 master-0 kubenswrapper[28120]: I0220 15:05:41.270498 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-client-ca\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"
Feb 20 15:05:41.270746 master-0 kubenswrapper[28120]: I0220 15:05:41.270522 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd597b37-557c-473c-9191-91510e92fdd8-serving-cert\") pod \"console-operator-5df5ffc47c-f2p4m\" (UID: \"fd597b37-557c-473c-9191-91510e92fdd8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m"
Feb 20 15:05:41.270746 master-0 kubenswrapper[28120]: I0220 15:05:41.270549 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0b95604a-6807-4654-a1b0-7dc20cc4418c-ready\") pod \"cni-sysctl-allowlist-ds-lh9jf\" (UID: \"0b95604a-6807-4654-a1b0-7dc20cc4418c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf"
Feb 20 15:05:41.270746 master-0 kubenswrapper[28120]: I0220 15:05:41.270576 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-config\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"
Feb 20 15:05:41.271363 master-0 kubenswrapper[28120]: I0220 15:05:41.271077 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-f2p4m"]
Feb 20 15:05:41.272609 master-0 kubenswrapper[28120]: I0220 15:05:41.272549 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-config\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"
Feb 20 15:05:41.274302 master-0 kubenswrapper[28120]: I0220 15:05:41.274242 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-proxy-ca-bundles\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"
Feb 20 15:05:41.276995 master-0 kubenswrapper[28120]: I0220 15:05:41.275304 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e11583a-e3a5-4634-89bc-6edb03f6ba02-config\") pod \"route-controller-manager-6fcf8cbd8f-6gz7r\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"
Feb 20 15:05:41.276995 master-0 kubenswrapper[28120]: I0220 15:05:41.275438 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-client-ca\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"
Feb 20 15:05:41.276995 master-0 kubenswrapper[28120]: I0220 15:05:41.276785 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea38f6cf-c959-49b0-85e5-868c3b3187cf-config-volume\") pod \"collect-profiles-29526660-2pk72\" (UID: \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72"
Feb 20 15:05:41.277346 master-0 kubenswrapper[28120]: I0220 15:05:41.277304 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd597b37-557c-473c-9191-91510e92fdd8-trusted-ca\") pod \"console-operator-5df5ffc47c-f2p4m\" (UID: \"fd597b37-557c-473c-9191-91510e92fdd8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m"
Feb 20 15:05:41.278539 master-0 kubenswrapper[28120]: I0220 15:05:41.278493 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e11583a-e3a5-4634-89bc-6edb03f6ba02-client-ca\") pod \"route-controller-manager-6fcf8cbd8f-6gz7r\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"
Feb 20 15:05:41.279067 master-0 kubenswrapper[28120]: I0220 15:05:41.279036 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea38f6cf-c959-49b0-85e5-868c3b3187cf-secret-volume\") pod \"collect-profiles-29526660-2pk72\" (UID: \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72"
Feb 20 15:05:41.279259 master-0 kubenswrapper[28120]: I0220 15:05:41.279198 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd597b37-557c-473c-9191-91510e92fdd8-serving-cert\") pod \"console-operator-5df5ffc47c-f2p4m\" (UID: \"fd597b37-557c-473c-9191-91510e92fdd8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m"
Feb 20 15:05:41.279521 master-0 kubenswrapper[28120]: I0220 15:05:41.279470 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd597b37-557c-473c-9191-91510e92fdd8-config\") pod \"console-operator-5df5ffc47c-f2p4m\" (UID: \"fd597b37-557c-473c-9191-91510e92fdd8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m"
Feb 20 15:05:41.279737 master-0 kubenswrapper[28120]: I0220 15:05:41.279697 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/638be8af-0cb4-4cbc-8bb9-72c996ee87b9-nginx-conf\") pod \"networking-console-plugin-79f587d78f-42dr4\" (UID: \"638be8af-0cb4-4cbc-8bb9-72c996ee87b9\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-42dr4"
Feb 20 15:05:41.279814 master-0 kubenswrapper[28120]: I0220 15:05:41.279763 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-serving-cert\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"
Feb 20 15:05:41.280661 master-0 kubenswrapper[28120]: I0220 15:05:41.280611 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/638be8af-0cb4-4cbc-8bb9-72c996ee87b9-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-42dr4\" (UID: \"638be8af-0cb4-4cbc-8bb9-72c996ee87b9\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-42dr4"
Feb 20 15:05:41.282484 master-0 kubenswrapper[28120]: I0220 15:05:41.282384 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e11583a-e3a5-4634-89bc-6edb03f6ba02-serving-cert\") pod \"route-controller-manager-6fcf8cbd8f-6gz7r\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"
Feb 20 15:05:41.306547 master-0 kubenswrapper[28120]: I0220 15:05:41.306489 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmg86\" (UniqueName: \"kubernetes.io/projected/4e11583a-e3a5-4634-89bc-6edb03f6ba02-kube-api-access-kmg86\") pod \"route-controller-manager-6fcf8cbd8f-6gz7r\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"
Feb 20 15:05:41.306547 master-0 kubenswrapper[28120]: I0220 15:05:41.306521 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k4cl\" (UniqueName: \"kubernetes.io/projected/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-kube-api-access-6k4cl\") pod \"controller-manager-6c6bcb95bb-jd879\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"
Feb 20 15:05:41.306844 master-0 kubenswrapper[28120]: I0220 15:05:41.306713 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zht4w\" (UniqueName: \"kubernetes.io/projected/ea38f6cf-c959-49b0-85e5-868c3b3187cf-kube-api-access-zht4w\") pod \"collect-profiles-29526660-2pk72\" (UID: \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72"
Feb 20 15:05:41.310602 master-0 kubenswrapper[28120]: I0220 15:05:41.310562 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxtjm\" (UniqueName: \"kubernetes.io/projected/fd597b37-557c-473c-9191-91510e92fdd8-kube-api-access-zxtjm\") pod \"console-operator-5df5ffc47c-f2p4m\" (UID: \"fd597b37-557c-473c-9191-91510e92fdd8\") " pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m"
Feb 20 15:05:41.371741 master-0 kubenswrapper[28120]: I0220 15:05:41.371686 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twblg\" (UniqueName: \"kubernetes.io/projected/0b95604a-6807-4654-a1b0-7dc20cc4418c-kube-api-access-twblg\") pod \"cni-sysctl-allowlist-ds-lh9jf\" (UID: \"0b95604a-6807-4654-a1b0-7dc20cc4418c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf"
Feb 20 15:05:41.371741 master-0 kubenswrapper[28120]: I0220 15:05:41.371736 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9f6549ad-a91a-4e45-ae85-f79ddf9aee86-monitoring-plugin-cert\") pod \"monitoring-plugin-9bd798c78-d5dft\" (UID: \"9f6549ad-a91a-4e45-ae85-f79ddf9aee86\") " pod="openshift-monitoring/monitoring-plugin-9bd798c78-d5dft"
Feb 20 15:05:41.371881 master-0 kubenswrapper[28120]: I0220 15:05:41.371759 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/563f832c-f7a7-405e-a4f3-6c84064b2942-serviceca\") pod \"node-ca-s6kvh\" (UID: \"563f832c-f7a7-405e-a4f3-6c84064b2942\") " pod="openshift-image-registry/node-ca-s6kvh"
Feb 20 15:05:41.371881 master-0 kubenswrapper[28120]: I0220 15:05:41.371803 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0b95604a-6807-4654-a1b0-7dc20cc4418c-ready\") pod \"cni-sysctl-allowlist-ds-lh9jf\" (UID: \"0b95604a-6807-4654-a1b0-7dc20cc4418c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf"
Feb 20 15:05:41.371881 master-0 kubenswrapper[28120]: I0220 15:05:41.371842 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-audit-policies\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.372094 master-0 kubenswrapper[28120]: I0220 15:05:41.372066 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.372140 master-0 kubenswrapper[28120]: I0220 15:05:41.372109 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.372173 master-0 kubenswrapper[28120]: I0220 15:05:41.372153 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-error\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.372205 master-0 kubenswrapper[28120]: I0220 15:05:41.372184 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.372468 master-0 kubenswrapper[28120]: I0220 15:05:41.372421 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b95604a-6807-4654-a1b0-7dc20cc4418c-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-lh9jf\" (UID: \"0b95604a-6807-4654-a1b0-7dc20cc4418c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf"
Feb 20 15:05:41.372523 master-0 kubenswrapper[28120]: I0220 15:05:41.372456 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/563f832c-f7a7-405e-a4f3-6c84064b2942-serviceca\") pod \"node-ca-s6kvh\" (UID: \"563f832c-f7a7-405e-a4f3-6c84064b2942\") " pod="openshift-image-registry/node-ca-s6kvh"
Feb 20 15:05:41.372769 master-0 kubenswrapper[28120]: I0220 15:05:41.372717 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc47b01c-8c45-4e60-9558-7d34770441d4-audit-dir\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.373210 master-0 kubenswrapper[28120]: I0220 15:05:41.373144 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0b95604a-6807-4654-a1b0-7dc20cc4418c-ready\") pod \"cni-sysctl-allowlist-ds-lh9jf\" (UID: \"0b95604a-6807-4654-a1b0-7dc20cc4418c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf"
Feb 20 15:05:41.373267 master-0 kubenswrapper[28120]: I0220 15:05:41.373162 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b95604a-6807-4654-a1b0-7dc20cc4418c-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-lh9jf\" (UID: \"0b95604a-6807-4654-a1b0-7dc20cc4418c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf"
Feb 20 15:05:41.373660 master-0 kubenswrapper[28120]: I0220 15:05:41.373622 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.374331 master-0 kubenswrapper[28120]: I0220 15:05:41.374297 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-login\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.374406 master-0 kubenswrapper[28120]: I0220 15:05:41.374375 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b95604a-6807-4654-a1b0-7dc20cc4418c-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-lh9jf\" (UID: \"0b95604a-6807-4654-a1b0-7dc20cc4418c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf"
Feb 20 15:05:41.374442 master-0 kubenswrapper[28120]: I0220 15:05:41.374419 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.374474 master-0 kubenswrapper[28120]: I0220 15:05:41.374449 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.374509 master-0 kubenswrapper[28120]: I0220 15:05:41.374478 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-session\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.374509 master-0 kubenswrapper[28120]: I0220 15:05:41.374500 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.374579 master-0 kubenswrapper[28120]: I0220 15:05:41.374520 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/563f832c-f7a7-405e-a4f3-6c84064b2942-host\") pod \"node-ca-s6kvh\" (UID: \"563f832c-f7a7-405e-a4f3-6c84064b2942\") " pod="openshift-image-registry/node-ca-s6kvh"
Feb 20 15:05:41.374773 master-0 kubenswrapper[28120]: I0220 15:05:41.374740 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b95604a-6807-4654-a1b0-7dc20cc4418c-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-lh9jf\" (UID: \"0b95604a-6807-4654-a1b0-7dc20cc4418c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf"
Feb 20 15:05:41.374829 master-0 kubenswrapper[28120]: I0220 15:05:41.374773 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.374866 master-0 kubenswrapper[28120]: I0220 15:05:41.374825 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/563f832c-f7a7-405e-a4f3-6c84064b2942-host\") pod \"node-ca-s6kvh\" (UID: \"563f832c-f7a7-405e-a4f3-6c84064b2942\") " pod="openshift-image-registry/node-ca-s6kvh"
Feb 20 15:05:41.375594 master-0 kubenswrapper[28120]: I0220 15:05:41.374650 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc47b01c-8c45-4e60-9558-7d34770441d4-audit-dir\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.375594 master-0 kubenswrapper[28120]: I0220 15:05:41.375303 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scbvx\" (UniqueName: \"kubernetes.io/projected/563f832c-f7a7-405e-a4f3-6c84064b2942-kube-api-access-scbvx\") pod \"node-ca-s6kvh\" (UID: \"563f832c-f7a7-405e-a4f3-6c84064b2942\") " pod="openshift-image-registry/node-ca-s6kvh"
Feb 20 15:05:41.375594 master-0 kubenswrapper[28120]: I0220 15:05:41.375393 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xllbv\" (UniqueName: \"kubernetes.io/projected/bc47b01c-8c45-4e60-9558-7d34770441d4-kube-api-access-xllbv\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.375594 master-0 kubenswrapper[28120]: I0220 15:05:41.375549 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.376168 master-0 kubenswrapper[28120]: I0220 15:05:41.376149 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.377186 master-0 kubenswrapper[28120]: I0220 15:05:41.377150 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.377433 master-0 kubenswrapper[28120]: I0220 15:05:41.377391 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-audit-policies\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.378253 master-0 kubenswrapper[28120]: I0220 15:05:41.378218 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.378315 master-0 kubenswrapper[28120]: I0220 15:05:41.378218 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9f6549ad-a91a-4e45-ae85-f79ddf9aee86-monitoring-plugin-cert\") pod \"monitoring-plugin-9bd798c78-d5dft\" (UID: \"9f6549ad-a91a-4e45-ae85-f79ddf9aee86\") " pod="openshift-monitoring/monitoring-plugin-9bd798c78-d5dft"
Feb 20 15:05:41.378352 master-0 kubenswrapper[28120]: I0220 15:05:41.378331 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.378934 master-0 kubenswrapper[28120]: I0220 15:05:41.378881 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-error\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.379699 master-0 kubenswrapper[28120]: I0220 15:05:41.379650 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-session\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.390537 master-0 kubenswrapper[28120]: I0220 15:05:41.390482 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scbvx\" (UniqueName: \"kubernetes.io/projected/563f832c-f7a7-405e-a4f3-6c84064b2942-kube-api-access-scbvx\") pod \"node-ca-s6kvh\" (UID: \"563f832c-f7a7-405e-a4f3-6c84064b2942\") " pod="openshift-image-registry/node-ca-s6kvh"
Feb 20 15:05:41.391288 master-0 kubenswrapper[28120]: I0220 15:05:41.391249 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.392059 master-0 kubenswrapper[28120]: I0220 15:05:41.392023 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-login\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.394541 master-0 kubenswrapper[28120]: I0220 15:05:41.394510 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xllbv\" (UniqueName: \"kubernetes.io/projected/bc47b01c-8c45-4e60-9558-7d34770441d4-kube-api-access-xllbv\") pod \"oauth-openshift-7b96f5c8d4-6xtzs\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.394802 master-0 kubenswrapper[28120]: I0220 15:05:41.394753 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twblg\" (UniqueName: \"kubernetes.io/projected/0b95604a-6807-4654-a1b0-7dc20cc4418c-kube-api-access-twblg\") pod \"cni-sysctl-allowlist-ds-lh9jf\" (UID: \"0b95604a-6807-4654-a1b0-7dc20cc4418c\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf"
Feb 20 15:05:41.430153 master-0 kubenswrapper[28120]: I0220 15:05:41.430092 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-9bd798c78-d5dft"
Feb 20 15:05:41.441498 master-0 kubenswrapper[28120]: I0220 15:05:41.441417 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"
Feb 20 15:05:41.475797 master-0 kubenswrapper[28120]: I0220 15:05:41.475683 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-42dr4"
Feb 20 15:05:41.499441 master-0 kubenswrapper[28120]: I0220 15:05:41.499368 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72"
Feb 20 15:05:41.522300 master-0 kubenswrapper[28120]: I0220 15:05:41.522245 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"
Feb 20 15:05:41.551860 master-0 kubenswrapper[28120]: I0220 15:05:41.551130 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m"
Feb 20 15:05:41.569366 master-0 kubenswrapper[28120]: I0220 15:05:41.568533 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-s6kvh"
Feb 20 15:05:41.602754 master-0 kubenswrapper[28120]: I0220 15:05:41.602343 28120 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 20 15:05:41.652024 master-0 kubenswrapper[28120]: I0220 15:05:41.651914 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:41.673350 master-0 kubenswrapper[28120]: I0220 15:05:41.673308 28120 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf" Feb 20 15:05:41.694150 master-0 kubenswrapper[28120]: W0220 15:05:41.694100 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b95604a_6807_4654_a1b0_7dc20cc4418c.slice/crio-d60a8b21345008be7a9ad8cb8e1f20677382ab03ec0e93b18ff4df6c6ccfdb89 WatchSource:0}: Error finding container d60a8b21345008be7a9ad8cb8e1f20677382ab03ec0e93b18ff4df6c6ccfdb89: Status 404 returned error can't find the container with id d60a8b21345008be7a9ad8cb8e1f20677382ab03ec0e93b18ff4df6c6ccfdb89 Feb 20 15:05:41.904021 master-0 kubenswrapper[28120]: I0220 15:05:41.903969 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-9bd798c78-d5dft"] Feb 20 15:05:41.918841 master-0 kubenswrapper[28120]: W0220 15:05:41.918789 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f6549ad_a91a_4e45_ae85_f79ddf9aee86.slice/crio-7bb384e0d3246f8ffc997c54ff113acdd8353e1bb10ecb66f673353b6f3c2208 WatchSource:0}: Error finding container 7bb384e0d3246f8ffc997c54ff113acdd8353e1bb10ecb66f673353b6f3c2208: Status 404 returned error can't find the container with id 7bb384e0d3246f8ffc997c54ff113acdd8353e1bb10ecb66f673353b6f3c2208 Feb 20 15:05:41.925283 master-0 kubenswrapper[28120]: W0220 15:05:41.925246 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f8fbacc_4698_43ac_8941_e3a3db7c5dea.slice/crio-6df8cdc58f9d96783bb3ca6e76faf33668e2e87dcd7359442ebf564c1915f2ff WatchSource:0}: Error finding container 6df8cdc58f9d96783bb3ca6e76faf33668e2e87dcd7359442ebf564c1915f2ff: Status 404 returned error can't find the container with id 6df8cdc58f9d96783bb3ca6e76faf33668e2e87dcd7359442ebf564c1915f2ff Feb 20 15:05:41.928936 master-0 kubenswrapper[28120]: I0220 
15:05:41.928862 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"] Feb 20 15:05:42.079749 master-0 kubenswrapper[28120]: I0220 15:05:42.079275 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72"] Feb 20 15:05:42.079749 master-0 kubenswrapper[28120]: I0220 15:05:42.079340 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-s6kvh" event={"ID":"563f832c-f7a7-405e-a4f3-6c84064b2942","Type":"ContainerStarted","Data":"9fac3c55fa5e23a623553fe8bcf387a3dcb55284f6ea0e65d1f74d0daafa5d19"} Feb 20 15:05:42.079749 master-0 kubenswrapper[28120]: I0220 15:05:42.079395 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-f2p4m"] Feb 20 15:05:42.083409 master-0 kubenswrapper[28120]: I0220 15:05:42.081816 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-42dr4"] Feb 20 15:05:42.083774 master-0 kubenswrapper[28120]: I0220 15:05:42.083709 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-9bd798c78-d5dft" event={"ID":"9f6549ad-a91a-4e45-ae85-f79ddf9aee86","Type":"ContainerStarted","Data":"7bb384e0d3246f8ffc997c54ff113acdd8353e1bb10ecb66f673353b6f3c2208"} Feb 20 15:05:42.086179 master-0 kubenswrapper[28120]: I0220 15:05:42.086093 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" event={"ID":"2f8fbacc-4698-43ac-8941-e3a3db7c5dea","Type":"ContainerStarted","Data":"6df8cdc58f9d96783bb3ca6e76faf33668e2e87dcd7359442ebf564c1915f2ff"} Feb 20 15:05:42.088012 master-0 kubenswrapper[28120]: I0220 15:05:42.087946 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf" 
event={"ID":"0b95604a-6807-4654-a1b0-7dc20cc4418c","Type":"ContainerStarted","Data":"022a50bc2d88f6eea8c0664cd9f69aaae364fe47c11146634987f954803c2d12"} Feb 20 15:05:42.088012 master-0 kubenswrapper[28120]: I0220 15:05:42.088003 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf" event={"ID":"0b95604a-6807-4654-a1b0-7dc20cc4418c","Type":"ContainerStarted","Data":"d60a8b21345008be7a9ad8cb8e1f20677382ab03ec0e93b18ff4df6c6ccfdb89"} Feb 20 15:05:42.088540 master-0 kubenswrapper[28120]: I0220 15:05:42.088464 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf" Feb 20 15:05:42.101219 master-0 kubenswrapper[28120]: W0220 15:05:42.101183 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd597b37_557c_473c_9191_91510e92fdd8.slice/crio-f90f9a4d674e2d6bdac36d80e87175a423c7742220669d8976fa2570460a1e4c WatchSource:0}: Error finding container f90f9a4d674e2d6bdac36d80e87175a423c7742220669d8976fa2570460a1e4c: Status 404 returned error can't find the container with id f90f9a4d674e2d6bdac36d80e87175a423c7742220669d8976fa2570460a1e4c Feb 20 15:05:42.103013 master-0 kubenswrapper[28120]: W0220 15:05:42.102962 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod638be8af_0cb4_4cbc_8bb9_72c996ee87b9.slice/crio-2e917157da74cdbdfbabe6c2894b012dccc12200c2eb8e82fa1591e750ee82e0 WatchSource:0}: Error finding container 2e917157da74cdbdfbabe6c2894b012dccc12200c2eb8e82fa1591e750ee82e0: Status 404 returned error can't find the container with id 2e917157da74cdbdfbabe6c2894b012dccc12200c2eb8e82fa1591e750ee82e0 Feb 20 15:05:42.104956 master-0 kubenswrapper[28120]: W0220 15:05:42.104890 28120 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e11583a_e3a5_4634_89bc_6edb03f6ba02.slice/crio-48ef437a777069d0baa3a357450465f63d399658e873940561cb3da1d9dacc37 WatchSource:0}: Error finding container 48ef437a777069d0baa3a357450465f63d399658e873940561cb3da1d9dacc37: Status 404 returned error can't find the container with id 48ef437a777069d0baa3a357450465f63d399658e873940561cb3da1d9dacc37 Feb 20 15:05:42.108154 master-0 kubenswrapper[28120]: I0220 15:05:42.107917 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"] Feb 20 15:05:42.131166 master-0 kubenswrapper[28120]: I0220 15:05:42.126333 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf" Feb 20 15:05:42.138159 master-0 kubenswrapper[28120]: I0220 15:05:42.138115 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"] Feb 20 15:05:42.143152 master-0 kubenswrapper[28120]: I0220 15:05:42.143101 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-lh9jf" podStartSLOduration=331.143089511 podStartE2EDuration="5m31.143089511s" podCreationTimestamp="2026-02-20 15:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:05:42.128669788 +0000 UTC m=+280.389463361" watchObservedRunningTime="2026-02-20 15:05:42.143089511 +0000 UTC m=+280.403883064" Feb 20 15:05:42.158640 master-0 kubenswrapper[28120]: W0220 15:05:42.158577 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc47b01c_8c45_4e60_9558_7d34770441d4.slice/crio-89341d3c5d275b2547226300b721d50db25c215cd2a2005a65109649867d88aa WatchSource:0}: Error finding container 
89341d3c5d275b2547226300b721d50db25c215cd2a2005a65109649867d88aa: Status 404 returned error can't find the container with id 89341d3c5d275b2547226300b721d50db25c215cd2a2005a65109649867d88aa Feb 20 15:05:42.629646 master-0 kubenswrapper[28120]: I0220 15:05:42.629573 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-8-master-0"] Feb 20 15:05:42.630725 master-0 kubenswrapper[28120]: I0220 15:05:42.630688 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-8-master-0" Feb 20 15:05:42.642001 master-0 kubenswrapper[28120]: I0220 15:05:42.641952 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-8-master-0"] Feb 20 15:05:42.703637 master-0 kubenswrapper[28120]: I0220 15:05:42.703559 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-var-lock\") pod \"installer-8-master-0\" (UID: \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\") " pod="openshift-kube-apiserver/installer-8-master-0" Feb 20 15:05:42.703911 master-0 kubenswrapper[28120]: I0220 15:05:42.703705 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-kubelet-dir\") pod \"installer-8-master-0\" (UID: \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\") " pod="openshift-kube-apiserver/installer-8-master-0" Feb 20 15:05:42.703911 master-0 kubenswrapper[28120]: I0220 15:05:42.703837 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-kube-api-access\") pod \"installer-8-master-0\" (UID: \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\") " 
pod="openshift-kube-apiserver/installer-8-master-0" Feb 20 15:05:42.805671 master-0 kubenswrapper[28120]: I0220 15:05:42.805595 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-var-lock\") pod \"installer-8-master-0\" (UID: \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\") " pod="openshift-kube-apiserver/installer-8-master-0" Feb 20 15:05:42.805897 master-0 kubenswrapper[28120]: I0220 15:05:42.805767 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-var-lock\") pod \"installer-8-master-0\" (UID: \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\") " pod="openshift-kube-apiserver/installer-8-master-0" Feb 20 15:05:42.805897 master-0 kubenswrapper[28120]: I0220 15:05:42.805856 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-kubelet-dir\") pod \"installer-8-master-0\" (UID: \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\") " pod="openshift-kube-apiserver/installer-8-master-0" Feb 20 15:05:42.806016 master-0 kubenswrapper[28120]: I0220 15:05:42.805912 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-kube-api-access\") pod \"installer-8-master-0\" (UID: \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\") " pod="openshift-kube-apiserver/installer-8-master-0" Feb 20 15:05:42.806083 master-0 kubenswrapper[28120]: I0220 15:05:42.806025 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-kubelet-dir\") pod \"installer-8-master-0\" (UID: \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\") " 
pod="openshift-kube-apiserver/installer-8-master-0" Feb 20 15:05:42.829198 master-0 kubenswrapper[28120]: I0220 15:05:42.829156 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-kube-api-access\") pod \"installer-8-master-0\" (UID: \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\") " pod="openshift-kube-apiserver/installer-8-master-0" Feb 20 15:05:42.973888 master-0 kubenswrapper[28120]: I0220 15:05:42.973845 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-8-master-0" Feb 20 15:05:43.094632 master-0 kubenswrapper[28120]: I0220 15:05:43.094473 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m" event={"ID":"fd597b37-557c-473c-9191-91510e92fdd8","Type":"ContainerStarted","Data":"f90f9a4d674e2d6bdac36d80e87175a423c7742220669d8976fa2570460a1e4c"} Feb 20 15:05:43.095909 master-0 kubenswrapper[28120]: I0220 15:05:43.095875 28120 generic.go:334] "Generic (PLEG): container finished" podID="ea38f6cf-c959-49b0-85e5-868c3b3187cf" containerID="1105423561c204f6f75f25a4f763aae0ebd2fd864eba76c8813194e9c894411a" exitCode=0 Feb 20 15:05:43.095993 master-0 kubenswrapper[28120]: I0220 15:05:43.095937 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72" event={"ID":"ea38f6cf-c959-49b0-85e5-868c3b3187cf","Type":"ContainerDied","Data":"1105423561c204f6f75f25a4f763aae0ebd2fd864eba76c8813194e9c894411a"} Feb 20 15:05:43.096040 master-0 kubenswrapper[28120]: I0220 15:05:43.096011 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72" event={"ID":"ea38f6cf-c959-49b0-85e5-868c3b3187cf","Type":"ContainerStarted","Data":"4c882e5376d41420162b06f754a352196c3372e092295f2d4424c001cc3defa2"} Feb 20 
15:05:43.100689 master-0 kubenswrapper[28120]: I0220 15:05:43.100325 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" event={"ID":"bc47b01c-8c45-4e60-9558-7d34770441d4","Type":"ContainerStarted","Data":"89341d3c5d275b2547226300b721d50db25c215cd2a2005a65109649867d88aa"} Feb 20 15:05:43.106615 master-0 kubenswrapper[28120]: I0220 15:05:43.106565 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" event={"ID":"4e11583a-e3a5-4634-89bc-6edb03f6ba02","Type":"ContainerStarted","Data":"b83059054242c5a75ae5aa3fcc22958bcea0cc829794298aa24a6a97133df2bf"} Feb 20 15:05:43.106615 master-0 kubenswrapper[28120]: I0220 15:05:43.106612 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" event={"ID":"4e11583a-e3a5-4634-89bc-6edb03f6ba02","Type":"ContainerStarted","Data":"48ef437a777069d0baa3a357450465f63d399658e873940561cb3da1d9dacc37"} Feb 20 15:05:43.111743 master-0 kubenswrapper[28120]: I0220 15:05:43.109350 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" Feb 20 15:05:43.112245 master-0 kubenswrapper[28120]: I0220 15:05:43.112217 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" Feb 20 15:05:43.112885 master-0 kubenswrapper[28120]: I0220 15:05:43.112855 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-79f587d78f-42dr4" event={"ID":"638be8af-0cb4-4cbc-8bb9-72c996ee87b9","Type":"ContainerStarted","Data":"2e917157da74cdbdfbabe6c2894b012dccc12200c2eb8e82fa1591e750ee82e0"} Feb 20 15:05:43.115456 master-0 kubenswrapper[28120]: I0220 15:05:43.115419 28120 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" event={"ID":"2f8fbacc-4698-43ac-8941-e3a3db7c5dea","Type":"ContainerStarted","Data":"10d92bc258f8f8b9a79201f354e7c21782133cfd415da7b5b4cb3b3577c97674"} Feb 20 15:05:43.115456 master-0 kubenswrapper[28120]: I0220 15:05:43.115448 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" Feb 20 15:05:43.157583 master-0 kubenswrapper[28120]: I0220 15:05:43.157546 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" Feb 20 15:05:43.180810 master-0 kubenswrapper[28120]: I0220 15:05:43.180732 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" podStartSLOduration=232.180716961 podStartE2EDuration="3m52.180716961s" podCreationTimestamp="2026-02-20 15:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:05:43.177584252 +0000 UTC m=+281.438377835" watchObservedRunningTime="2026-02-20 15:05:43.180716961 +0000 UTC m=+281.441510524" Feb 20 15:05:43.205024 master-0 kubenswrapper[28120]: I0220 15:05:43.204899 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" podStartSLOduration=232.204858068 podStartE2EDuration="3m52.204858068s" podCreationTimestamp="2026-02-20 15:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:05:43.200660743 +0000 UTC m=+281.461454316" watchObservedRunningTime="2026-02-20 15:05:43.204858068 +0000 UTC m=+281.465651631" Feb 20 15:05:43.503173 master-0 kubenswrapper[28120]: I0220 
15:05:43.503123 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-8-master-0"] Feb 20 15:05:44.687844 master-0 kubenswrapper[28120]: W0220 15:05:44.687656 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod78eaf3b1_0c29_48ea_bf93_e3a6b9a79aa6.slice/crio-81938480325b6afb4453dfcf8feefefeaa92ffdb19e84afa903a902748890b30 WatchSource:0}: Error finding container 81938480325b6afb4453dfcf8feefefeaa92ffdb19e84afa903a902748890b30: Status 404 returned error can't find the container with id 81938480325b6afb4453dfcf8feefefeaa92ffdb19e84afa903a902748890b30 Feb 20 15:05:45.134974 master-0 kubenswrapper[28120]: I0220 15:05:45.134857 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-8-master-0" event={"ID":"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6","Type":"ContainerStarted","Data":"81938480325b6afb4453dfcf8feefefeaa92ffdb19e84afa903a902748890b30"} Feb 20 15:05:45.490229 master-0 kubenswrapper[28120]: I0220 15:05:45.490178 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72" Feb 20 15:05:45.597613 master-0 kubenswrapper[28120]: I0220 15:05:45.597494 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea38f6cf-c959-49b0-85e5-868c3b3187cf-config-volume\") pod \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\" (UID: \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\") " Feb 20 15:05:45.597808 master-0 kubenswrapper[28120]: I0220 15:05:45.597735 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zht4w\" (UniqueName: \"kubernetes.io/projected/ea38f6cf-c959-49b0-85e5-868c3b3187cf-kube-api-access-zht4w\") pod \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\" (UID: \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\") " Feb 20 15:05:45.598292 master-0 kubenswrapper[28120]: I0220 15:05:45.598247 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea38f6cf-c959-49b0-85e5-868c3b3187cf-secret-volume\") pod \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\" (UID: \"ea38f6cf-c959-49b0-85e5-868c3b3187cf\") " Feb 20 15:05:45.600221 master-0 kubenswrapper[28120]: I0220 15:05:45.600162 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea38f6cf-c959-49b0-85e5-868c3b3187cf-config-volume" (OuterVolumeSpecName: "config-volume") pod "ea38f6cf-c959-49b0-85e5-868c3b3187cf" (UID: "ea38f6cf-c959-49b0-85e5-868c3b3187cf"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:05:45.616462 master-0 kubenswrapper[28120]: I0220 15:05:45.614785 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea38f6cf-c959-49b0-85e5-868c3b3187cf-kube-api-access-zht4w" (OuterVolumeSpecName: "kube-api-access-zht4w") pod "ea38f6cf-c959-49b0-85e5-868c3b3187cf" (UID: "ea38f6cf-c959-49b0-85e5-868c3b3187cf"). InnerVolumeSpecName "kube-api-access-zht4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:05:45.616462 master-0 kubenswrapper[28120]: I0220 15:05:45.615036 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea38f6cf-c959-49b0-85e5-868c3b3187cf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ea38f6cf-c959-49b0-85e5-868c3b3187cf" (UID: "ea38f6cf-c959-49b0-85e5-868c3b3187cf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:05:45.700554 master-0 kubenswrapper[28120]: I0220 15:05:45.700436 28120 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea38f6cf-c959-49b0-85e5-868c3b3187cf-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 20 15:05:45.700554 master-0 kubenswrapper[28120]: I0220 15:05:45.700485 28120 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea38f6cf-c959-49b0-85e5-868c3b3187cf-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 20 15:05:45.700554 master-0 kubenswrapper[28120]: I0220 15:05:45.700499 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zht4w\" (UniqueName: \"kubernetes.io/projected/ea38f6cf-c959-49b0-85e5-868c3b3187cf-kube-api-access-zht4w\") on node \"master-0\" DevicePath \"\"" Feb 20 15:05:46.145633 master-0 kubenswrapper[28120]: I0220 15:05:46.145026 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72" event={"ID":"ea38f6cf-c959-49b0-85e5-868c3b3187cf","Type":"ContainerDied","Data":"4c882e5376d41420162b06f754a352196c3372e092295f2d4424c001cc3defa2"} Feb 20 15:05:46.145633 master-0 kubenswrapper[28120]: I0220 15:05:46.145073 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c882e5376d41420162b06f754a352196c3372e092295f2d4424c001cc3defa2" Feb 20 15:05:46.145633 master-0 kubenswrapper[28120]: I0220 15:05:46.145075 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526660-2pk72" Feb 20 15:05:47.024534 master-0 kubenswrapper[28120]: I0220 15:05:47.024436 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-955b69498-f6nq7"] Feb 20 15:05:47.024984 master-0 kubenswrapper[28120]: E0220 15:05:47.024862 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea38f6cf-c959-49b0-85e5-868c3b3187cf" containerName="collect-profiles" Feb 20 15:05:47.024984 master-0 kubenswrapper[28120]: I0220 15:05:47.024884 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea38f6cf-c959-49b0-85e5-868c3b3187cf" containerName="collect-profiles" Feb 20 15:05:47.025200 master-0 kubenswrapper[28120]: I0220 15:05:47.025166 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea38f6cf-c959-49b0-85e5-868c3b3187cf" containerName="collect-profiles" Feb 20 15:05:47.025859 master-0 kubenswrapper[28120]: I0220 15:05:47.025827 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-955b69498-f6nq7" Feb 20 15:05:47.031565 master-0 kubenswrapper[28120]: I0220 15:05:47.031427 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-mblrr" Feb 20 15:05:47.032423 master-0 kubenswrapper[28120]: I0220 15:05:47.031750 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 20 15:05:47.032423 master-0 kubenswrapper[28120]: I0220 15:05:47.031979 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 20 15:05:47.038630 master-0 kubenswrapper[28120]: I0220 15:05:47.036329 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-f6nq7"] Feb 20 15:05:47.118337 master-0 kubenswrapper[28120]: I0220 15:05:47.118268 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nfrf\" (UniqueName: \"kubernetes.io/projected/15b884b3-2614-453a-88c5-f47fcc14aad1-kube-api-access-4nfrf\") pod \"downloads-955b69498-f6nq7\" (UID: \"15b884b3-2614-453a-88c5-f47fcc14aad1\") " pod="openshift-console/downloads-955b69498-f6nq7" Feb 20 15:05:47.156308 master-0 kubenswrapper[28120]: I0220 15:05:47.155781 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" event={"ID":"bc47b01c-8c45-4e60-9558-7d34770441d4","Type":"ContainerStarted","Data":"fad731f5662d750a81e6190957a60cb50151b6c92bf5f4edd1fbaf227d85ba11"} Feb 20 15:05:47.156308 master-0 kubenswrapper[28120]: I0220 15:05:47.156074 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" Feb 20 15:05:47.158871 master-0 kubenswrapper[28120]: I0220 15:05:47.158487 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-79f587d78f-42dr4" event={"ID":"638be8af-0cb4-4cbc-8bb9-72c996ee87b9","Type":"ContainerStarted","Data":"a29f4e201ca14f951656984ccdc682885df765b53f7154b9d4d20daad75235e7"}
Feb 20 15:05:47.161840 master-0 kubenswrapper[28120]: I0220 15:05:47.161768 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-9bd798c78-d5dft" event={"ID":"9f6549ad-a91a-4e45-ae85-f79ddf9aee86","Type":"ContainerStarted","Data":"c9792260100fa501f7aeee79992bcf4de75c8718b628af665d5503b828064e57"}
Feb 20 15:05:47.162036 master-0 kubenswrapper[28120]: I0220 15:05:47.161867 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-9bd798c78-d5dft"
Feb 20 15:05:47.164794 master-0 kubenswrapper[28120]: I0220 15:05:47.164713 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m" event={"ID":"fd597b37-557c-473c-9191-91510e92fdd8","Type":"ContainerStarted","Data":"24f4a1cca8c768da7d467703a38dc8fde7994586899928e67b6e51f49e8cfe37"}
Feb 20 15:05:47.165002 master-0 kubenswrapper[28120]: I0220 15:05:47.164882 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m"
Feb 20 15:05:47.165507 master-0 kubenswrapper[28120]: I0220 15:05:47.165453 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"
Feb 20 15:05:47.168979 master-0 kubenswrapper[28120]: I0220 15:05:47.168904 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-s6kvh" event={"ID":"563f832c-f7a7-405e-a4f3-6c84064b2942","Type":"ContainerStarted","Data":"2a40035a002b63d91ce2e95d1e5d292b7578c2072c6868e3ab6432d19924df37"}
Feb 20 15:05:47.172671 master-0 kubenswrapper[28120]: I0220 15:05:47.172609 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m"
Feb 20 15:05:47.172885 master-0 kubenswrapper[28120]: I0220 15:05:47.172819 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-8-master-0" event={"ID":"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6","Type":"ContainerStarted","Data":"b59d80a3e1ed21ce35267f7624e91d5c503d2c8145aa26f5499f55751a4bcbb1"}
Feb 20 15:05:47.176870 master-0 kubenswrapper[28120]: I0220 15:05:47.176816 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-9bd798c78-d5dft"
Feb 20 15:05:47.205131 master-0 kubenswrapper[28120]: I0220 15:05:47.201319 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" podStartSLOduration=55.307227135 podStartE2EDuration="59.201297067s" podCreationTimestamp="2026-02-20 15:04:48 +0000 UTC" firstStartedPulling="2026-02-20 15:05:42.160571411 +0000 UTC m=+280.421364984" lastFinishedPulling="2026-02-20 15:05:46.054641303 +0000 UTC m=+284.315434916" observedRunningTime="2026-02-20 15:05:47.193587073 +0000 UTC m=+285.454380696" watchObservedRunningTime="2026-02-20 15:05:47.201297067 +0000 UTC m=+285.462090660"
Feb 20 15:05:47.221695 master-0 kubenswrapper[28120]: I0220 15:05:47.221098 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nfrf\" (UniqueName: \"kubernetes.io/projected/15b884b3-2614-453a-88c5-f47fcc14aad1-kube-api-access-4nfrf\") pod \"downloads-955b69498-f6nq7\" (UID: \"15b884b3-2614-453a-88c5-f47fcc14aad1\") " pod="openshift-console/downloads-955b69498-f6nq7"
Feb 20 15:05:47.224873 master-0 kubenswrapper[28120]: I0220 15:05:47.224791 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-s6kvh" podStartSLOduration=40.846551639 podStartE2EDuration="45.224767988s" podCreationTimestamp="2026-02-20 15:05:02 +0000 UTC" firstStartedPulling="2026-02-20 15:05:41.602265177 +0000 UTC m=+279.863058740" lastFinishedPulling="2026-02-20 15:05:45.980481486 +0000 UTC m=+284.241275089" observedRunningTime="2026-02-20 15:05:47.216313655 +0000 UTC m=+285.477107328" watchObservedRunningTime="2026-02-20 15:05:47.224767988 +0000 UTC m=+285.485561561"
Feb 20 15:05:47.249238 master-0 kubenswrapper[28120]: I0220 15:05:47.249152 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nfrf\" (UniqueName: \"kubernetes.io/projected/15b884b3-2614-453a-88c5-f47fcc14aad1-kube-api-access-4nfrf\") pod \"downloads-955b69498-f6nq7\" (UID: \"15b884b3-2614-453a-88c5-f47fcc14aad1\") " pod="openshift-console/downloads-955b69498-f6nq7"
Feb 20 15:05:47.341974 master-0 kubenswrapper[28120]: I0220 15:05:47.340801 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-9bd798c78-d5dft" podStartSLOduration=262.281980009 podStartE2EDuration="4m26.340781438s" podCreationTimestamp="2026-02-20 15:01:21 +0000 UTC" firstStartedPulling="2026-02-20 15:05:41.921890523 +0000 UTC m=+280.182684106" lastFinishedPulling="2026-02-20 15:05:45.980691932 +0000 UTC m=+284.241485535" observedRunningTime="2026-02-20 15:05:47.340761728 +0000 UTC m=+285.601555311" watchObservedRunningTime="2026-02-20 15:05:47.340781438 +0000 UTC m=+285.601575011"
Feb 20 15:05:47.345826 master-0 kubenswrapper[28120]: I0220 15:05:47.344017 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-955b69498-f6nq7"
Feb 20 15:05:47.345826 master-0 kubenswrapper[28120]: I0220 15:05:47.344791 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-5df5ffc47c-f2p4m" podStartSLOduration=266.469347836 podStartE2EDuration="4m30.344777059s" podCreationTimestamp="2026-02-20 15:01:17 +0000 UTC" firstStartedPulling="2026-02-20 15:05:42.105108335 +0000 UTC m=+280.365901928" lastFinishedPulling="2026-02-20 15:05:45.980537558 +0000 UTC m=+284.241331151" observedRunningTime="2026-02-20 15:05:47.316279402 +0000 UTC m=+285.577072975" watchObservedRunningTime="2026-02-20 15:05:47.344777059 +0000 UTC m=+285.605570622"
Feb 20 15:05:47.372838 master-0 kubenswrapper[28120]: I0220 15:05:47.372699 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-8-master-0" podStartSLOduration=5.372682871 podStartE2EDuration="5.372682871s" podCreationTimestamp="2026-02-20 15:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:05:47.367260585 +0000 UTC m=+285.628054148" watchObservedRunningTime="2026-02-20 15:05:47.372682871 +0000 UTC m=+285.633476434"
Feb 20 15:05:47.403845 master-0 kubenswrapper[28120]: I0220 15:05:47.403174 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-79f587d78f-42dr4" podStartSLOduration=221.529875469 podStartE2EDuration="3m45.403155658s" podCreationTimestamp="2026-02-20 15:02:02 +0000 UTC" firstStartedPulling="2026-02-20 15:05:42.107218918 +0000 UTC m=+280.368012481" lastFinishedPulling="2026-02-20 15:05:45.980499067 +0000 UTC m=+284.241292670" observedRunningTime="2026-02-20 15:05:47.397958668 +0000 UTC m=+285.658752231" watchObservedRunningTime="2026-02-20 15:05:47.403155658 +0000 UTC m=+285.663949221"
Feb 20 15:05:47.861530 master-0 kubenswrapper[28120]: I0220 15:05:47.861227 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-f6nq7"]
Feb 20 15:05:47.873883 master-0 kubenswrapper[28120]: W0220 15:05:47.873829 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15b884b3_2614_453a_88c5_f47fcc14aad1.slice/crio-12c3bf9aa25e963b7900a403206bf231b778d1c5855402d568e36f1e49e1bd9c WatchSource:0}: Error finding container 12c3bf9aa25e963b7900a403206bf231b778d1c5855402d568e36f1e49e1bd9c: Status 404 returned error can't find the container with id 12c3bf9aa25e963b7900a403206bf231b778d1c5855402d568e36f1e49e1bd9c
Feb 20 15:05:48.184855 master-0 kubenswrapper[28120]: I0220 15:05:48.183767 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-f6nq7" event={"ID":"15b884b3-2614-453a-88c5-f47fcc14aad1","Type":"ContainerStarted","Data":"12c3bf9aa25e963b7900a403206bf231b778d1c5855402d568e36f1e49e1bd9c"}
Feb 20 15:05:52.878479 master-0 kubenswrapper[28120]: I0220 15:05:52.878418 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 20 15:05:52.882015 master-0 kubenswrapper[28120]: I0220 15:05:52.881968 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:52.885688 master-0 kubenswrapper[28120]: I0220 15:05:52.885637 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 20 15:05:52.885982 master-0 kubenswrapper[28120]: I0220 15:05:52.885953 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 20 15:05:52.886299 master-0 kubenswrapper[28120]: I0220 15:05:52.886270 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 20 15:05:52.886607 master-0 kubenswrapper[28120]: I0220 15:05:52.886582 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 20 15:05:52.886793 master-0 kubenswrapper[28120]: I0220 15:05:52.886766 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 20 15:05:52.887023 master-0 kubenswrapper[28120]: I0220 15:05:52.886996 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 20 15:05:52.889542 master-0 kubenswrapper[28120]: I0220 15:05:52.889506 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 20 15:05:52.895250 master-0 kubenswrapper[28120]: I0220 15:05:52.894994 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 20 15:05:52.910072 master-0 kubenswrapper[28120]: I0220 15:05:52.910016 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 20 15:05:53.010225 master-0 kubenswrapper[28120]: I0220 15:05:53.010120 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/48af5081-6d64-454b-979a-ee1bc7065bc4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.010225 master-0 kubenswrapper[28120]: I0220 15:05:53.010212 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-config-volume\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.010225 master-0 kubenswrapper[28120]: I0220 15:05:53.010236 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lj7n\" (UniqueName: \"kubernetes.io/projected/48af5081-6d64-454b-979a-ee1bc7065bc4-kube-api-access-8lj7n\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.010534 master-0 kubenswrapper[28120]: I0220 15:05:53.010442 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/48af5081-6d64-454b-979a-ee1bc7065bc4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.010534 master-0 kubenswrapper[28120]: I0220 15:05:53.010509 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-web-config\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.010628 master-0 kubenswrapper[28120]: I0220 15:05:53.010551 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.010628 master-0 kubenswrapper[28120]: I0220 15:05:53.010570 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/48af5081-6d64-454b-979a-ee1bc7065bc4-config-out\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.010711 master-0 kubenswrapper[28120]: I0220 15:05:53.010653 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.010711 master-0 kubenswrapper[28120]: I0220 15:05:53.010680 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48af5081-6d64-454b-979a-ee1bc7065bc4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.010805 master-0 kubenswrapper[28120]: I0220 15:05:53.010743 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.010805 master-0 kubenswrapper[28120]: I0220 15:05:53.010769 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/48af5081-6d64-454b-979a-ee1bc7065bc4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.010805 master-0 kubenswrapper[28120]: I0220 15:05:53.010786 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.112728 master-0 kubenswrapper[28120]: I0220 15:05:53.112674 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/48af5081-6d64-454b-979a-ee1bc7065bc4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.112728 master-0 kubenswrapper[28120]: I0220 15:05:53.112723 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-config-volume\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.112960 master-0 kubenswrapper[28120]: I0220 15:05:53.112747 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lj7n\" (UniqueName: \"kubernetes.io/projected/48af5081-6d64-454b-979a-ee1bc7065bc4-kube-api-access-8lj7n\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.112960 master-0 kubenswrapper[28120]: I0220 15:05:53.112771 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/48af5081-6d64-454b-979a-ee1bc7065bc4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.113053 master-0 kubenswrapper[28120]: I0220 15:05:53.113005 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-web-config\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.113096 master-0 kubenswrapper[28120]: I0220 15:05:53.113078 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.113129 master-0 kubenswrapper[28120]: I0220 15:05:53.113106 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/48af5081-6d64-454b-979a-ee1bc7065bc4-config-out\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.113349 master-0 kubenswrapper[28120]: I0220 15:05:53.113322 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.113386 master-0 kubenswrapper[28120]: I0220 15:05:53.113356 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48af5081-6d64-454b-979a-ee1bc7065bc4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.113434 master-0 kubenswrapper[28120]: I0220 15:05:53.113415 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.113468 master-0 kubenswrapper[28120]: I0220 15:05:53.113443 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/48af5081-6d64-454b-979a-ee1bc7065bc4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.113502 master-0 kubenswrapper[28120]: I0220 15:05:53.113470 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.113611 master-0 kubenswrapper[28120]: I0220 15:05:53.113544 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/48af5081-6d64-454b-979a-ee1bc7065bc4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.115038 master-0 kubenswrapper[28120]: I0220 15:05:53.114995 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/48af5081-6d64-454b-979a-ee1bc7065bc4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.115308 master-0 kubenswrapper[28120]: I0220 15:05:53.115281 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48af5081-6d64-454b-979a-ee1bc7065bc4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.116577 master-0 kubenswrapper[28120]: I0220 15:05:53.116498 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/48af5081-6d64-454b-979a-ee1bc7065bc4-config-out\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.117163 master-0 kubenswrapper[28120]: I0220 15:05:53.117138 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-web-config\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.117309 master-0 kubenswrapper[28120]: I0220 15:05:53.117284 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-config-volume\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.117470 master-0 kubenswrapper[28120]: I0220 15:05:53.117444 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.117519 master-0 kubenswrapper[28120]: I0220 15:05:53.117456 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.118001 master-0 kubenswrapper[28120]: I0220 15:05:53.117977 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/48af5081-6d64-454b-979a-ee1bc7065bc4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.118740 master-0 kubenswrapper[28120]: I0220 15:05:53.118700 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.127009 master-0 kubenswrapper[28120]: I0220 15:05:53.126982 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.130537 master-0 kubenswrapper[28120]: I0220 15:05:53.130476 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lj7n\" (UniqueName: \"kubernetes.io/projected/48af5081-6d64-454b-979a-ee1bc7065bc4-kube-api-access-8lj7n\") pod \"alertmanager-main-0\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.223079 master-0 kubenswrapper[28120]: I0220 15:05:53.223031 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:05:53.541055 master-0 kubenswrapper[28120]: I0220 15:05:53.537382 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6bcb747b79-dfz8f"]
Feb 20 15:05:53.541055 master-0 kubenswrapper[28120]: I0220 15:05:53.539975 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.554959 master-0 kubenswrapper[28120]: I0220 15:05:53.545856 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 20 15:05:53.554959 master-0 kubenswrapper[28120]: I0220 15:05:53.547138 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 20 15:05:53.554959 master-0 kubenswrapper[28120]: I0220 15:05:53.547154 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-6nj4c"
Feb 20 15:05:53.554959 master-0 kubenswrapper[28120]: I0220 15:05:53.547510 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 20 15:05:53.554959 master-0 kubenswrapper[28120]: I0220 15:05:53.547695 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 20 15:05:53.554959 master-0 kubenswrapper[28120]: I0220 15:05:53.547823 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 20 15:05:53.571622 master-0 kubenswrapper[28120]: I0220 15:05:53.571564 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bcb747b79-dfz8f"]
Feb 20 15:05:53.622233 master-0 kubenswrapper[28120]: I0220 15:05:53.622169 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-service-ca\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.622440 master-0 kubenswrapper[28120]: I0220 15:05:53.622326 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxtgp\" (UniqueName: \"kubernetes.io/projected/2ff24014-84b3-43df-a20a-7caa44088b0c-kube-api-access-cxtgp\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.622440 master-0 kubenswrapper[28120]: I0220 15:05:53.622377 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-oauth-serving-cert\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.622440 master-0 kubenswrapper[28120]: I0220 15:05:53.622407 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff24014-84b3-43df-a20a-7caa44088b0c-console-serving-cert\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.622681 master-0 kubenswrapper[28120]: I0220 15:05:53.622616 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2ff24014-84b3-43df-a20a-7caa44088b0c-console-oauth-config\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.622745 master-0 kubenswrapper[28120]: I0220 15:05:53.622704 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-console-config\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.696855 master-0 kubenswrapper[28120]: I0220 15:05:53.696804 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 20 15:05:53.725675 master-0 kubenswrapper[28120]: I0220 15:05:53.725638 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxtgp\" (UniqueName: \"kubernetes.io/projected/2ff24014-84b3-43df-a20a-7caa44088b0c-kube-api-access-cxtgp\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.725794 master-0 kubenswrapper[28120]: I0220 15:05:53.725697 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-oauth-serving-cert\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.725794 master-0 kubenswrapper[28120]: I0220 15:05:53.725725 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff24014-84b3-43df-a20a-7caa44088b0c-console-serving-cert\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.725794 master-0 kubenswrapper[28120]: I0220 15:05:53.725766 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2ff24014-84b3-43df-a20a-7caa44088b0c-console-oauth-config\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.726235 master-0 kubenswrapper[28120]: I0220 15:05:53.726182 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-console-config\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.726310 master-0 kubenswrapper[28120]: I0220 15:05:53.726261 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-service-ca\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.734944 master-0 kubenswrapper[28120]: I0220 15:05:53.727511 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-oauth-serving-cert\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.734944 master-0 kubenswrapper[28120]: I0220 15:05:53.727766 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-service-ca\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.734944 master-0 kubenswrapper[28120]: I0220 15:05:53.728192 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-console-config\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.734944 master-0 kubenswrapper[28120]: I0220 15:05:53.729387 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2ff24014-84b3-43df-a20a-7caa44088b0c-console-oauth-config\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.735371 master-0 kubenswrapper[28120]: I0220 15:05:53.735333 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff24014-84b3-43df-a20a-7caa44088b0c-console-serving-cert\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.741623 master-0 kubenswrapper[28120]: I0220 15:05:53.741584 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxtgp\" (UniqueName: \"kubernetes.io/projected/2ff24014-84b3-43df-a20a-7caa44088b0c-kube-api-access-cxtgp\") pod \"console-6bcb747b79-dfz8f\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") " pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.892983 master-0 kubenswrapper[28120]: I0220 15:05:53.892911 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5cb8875886-98kx5"]
Feb 20 15:05:53.894220 master-0 kubenswrapper[28120]: I0220 15:05:53.894187 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:05:53.895306 master-0 kubenswrapper[28120]: I0220 15:05:53.895281 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5"
Feb 20 15:05:53.900507 master-0 kubenswrapper[28120]: I0220 15:05:53.899460 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Feb 20 15:05:53.900507 master-0 kubenswrapper[28120]: I0220 15:05:53.899622 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 20 15:05:53.900507 master-0 kubenswrapper[28120]: I0220 15:05:53.899658 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 20 15:05:53.900507 master-0 kubenswrapper[28120]: I0220 15:05:53.899773 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Feb 20 15:05:53.900507 master-0 kubenswrapper[28120]: I0220 15:05:53.899832 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4ru6f463aj8a5"
Feb 20 15:05:53.900507 master-0 kubenswrapper[28120]: I0220 15:05:53.899889 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 20 15:05:53.907106 master-0 kubenswrapper[28120]: I0220 15:05:53.905809 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5cb8875886-98kx5"]
Feb 20 15:05:54.031525 master-0 kubenswrapper[28120]: I0220 15:05:54.031461 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5"
Feb 20 15:05:54.031733 master-0 kubenswrapper[28120]: I0220 15:05:54.031686 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5"
Feb 20 15:05:54.031802 master-0 kubenswrapper[28120]: I0220 15:05:54.031778 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-grpc-tls\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5"
Feb 20 15:05:54.031851 master-0 kubenswrapper[28120]: I0220 15:05:54.031834 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1c673ef8-317e-40ef-a225-412a116cced5-metrics-client-ca\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5"
Feb 20 15:05:54.032077 master-0 kubenswrapper[28120]: I0220 15:05:54.032036 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmnm\" (UniqueName: \"kubernetes.io/projected/1c673ef8-317e-40ef-a225-412a116cced5-kube-api-access-vcmnm\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5"
Feb 20 15:05:54.032159 master-0 kubenswrapper[28120]: I0220 15:05:54.032138 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-tls\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5"
Feb 20 15:05:54.032278 master-0 kubenswrapper[28120]: I0220 15:05:54.032261 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5"
Feb 20 15:05:54.032416 master-0 kubenswrapper[28120]: I0220 15:05:54.032371 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5"
Feb 20 15:05:54.133443 master-0 kubenswrapper[28120]: I0220 15:05:54.133375 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5"
Feb 20 15:05:54.133443 master-0 kubenswrapper[28120]: I0220 15:05:54.133431 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName:
\"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-grpc-tls\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.133687 master-0 kubenswrapper[28120]: I0220 15:05:54.133628 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1c673ef8-317e-40ef-a225-412a116cced5-metrics-client-ca\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.133736 master-0 kubenswrapper[28120]: I0220 15:05:54.133716 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmnm\" (UniqueName: \"kubernetes.io/projected/1c673ef8-317e-40ef-a225-412a116cced5-kube-api-access-vcmnm\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.133782 master-0 kubenswrapper[28120]: I0220 15:05:54.133759 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-tls\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.133854 master-0 kubenswrapper[28120]: I0220 15:05:54.133824 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" 
Feb 20 15:05:54.133904 master-0 kubenswrapper[28120]: I0220 15:05:54.133882 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.133975 master-0 kubenswrapper[28120]: I0220 15:05:54.133916 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.135022 master-0 kubenswrapper[28120]: I0220 15:05:54.134996 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1c673ef8-317e-40ef-a225-412a116cced5-metrics-client-ca\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.138569 master-0 kubenswrapper[28120]: I0220 15:05:54.138537 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-tls\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.141982 master-0 kubenswrapper[28120]: I0220 15:05:54.138934 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-grpc-tls\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.141982 master-0 kubenswrapper[28120]: I0220 15:05:54.139415 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.144375 master-0 kubenswrapper[28120]: I0220 15:05:54.144323 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.144439 master-0 kubenswrapper[28120]: I0220 15:05:54.144369 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.144535 master-0 kubenswrapper[28120]: I0220 15:05:54.144506 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1c673ef8-317e-40ef-a225-412a116cced5-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: 
\"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.155638 master-0 kubenswrapper[28120]: I0220 15:05:54.155605 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmnm\" (UniqueName: \"kubernetes.io/projected/1c673ef8-317e-40ef-a225-412a116cced5-kube-api-access-vcmnm\") pod \"thanos-querier-5cb8875886-98kx5\" (UID: \"1c673ef8-317e-40ef-a225-412a116cced5\") " pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.217543 master-0 kubenswrapper[28120]: I0220 15:05:54.216871 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:05:54.246467 master-0 kubenswrapper[28120]: I0220 15:05:54.246408 28120 generic.go:334] "Generic (PLEG): container finished" podID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerID="5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199" exitCode=0 Feb 20 15:05:54.246467 master-0 kubenswrapper[28120]: I0220 15:05:54.246459 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerDied","Data":"5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199"} Feb 20 15:05:54.246720 master-0 kubenswrapper[28120]: I0220 15:05:54.246489 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerStarted","Data":"19a83332786319b4e2e769e38ccf1e9e48f51720bf2ae179d6d2b27a26a896ff"} Feb 20 15:05:54.334018 master-0 kubenswrapper[28120]: I0220 15:05:54.332047 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bcb747b79-dfz8f"] Feb 20 15:05:54.338061 master-0 kubenswrapper[28120]: W0220 15:05:54.338002 28120 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ff24014_84b3_43df_a20a_7caa44088b0c.slice/crio-1a03ab185ca4d95b1b2224e20bb956f383e9e0ac8e82050e65434df2ae9a4d44 WatchSource:0}: Error finding container 1a03ab185ca4d95b1b2224e20bb956f383e9e0ac8e82050e65434df2ae9a4d44: Status 404 returned error can't find the container with id 1a03ab185ca4d95b1b2224e20bb956f383e9e0ac8e82050e65434df2ae9a4d44 Feb 20 15:05:54.717374 master-0 kubenswrapper[28120]: W0220 15:05:54.717287 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c673ef8_317e_40ef_a225_412a116cced5.slice/crio-feb0100f6af06300e8490be3e4fc852e26221cf6b13652a059ee4f5a97cfbdee WatchSource:0}: Error finding container feb0100f6af06300e8490be3e4fc852e26221cf6b13652a059ee4f5a97cfbdee: Status 404 returned error can't find the container with id feb0100f6af06300e8490be3e4fc852e26221cf6b13652a059ee4f5a97cfbdee Feb 20 15:05:54.728535 master-0 kubenswrapper[28120]: I0220 15:05:54.727500 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5cb8875886-98kx5"] Feb 20 15:05:55.254503 master-0 kubenswrapper[28120]: I0220 15:05:55.254452 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bcb747b79-dfz8f" event={"ID":"2ff24014-84b3-43df-a20a-7caa44088b0c","Type":"ContainerStarted","Data":"1a03ab185ca4d95b1b2224e20bb956f383e9e0ac8e82050e65434df2ae9a4d44"} Feb 20 15:05:55.255633 master-0 kubenswrapper[28120]: I0220 15:05:55.255589 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" event={"ID":"1c673ef8-317e-40ef-a225-412a116cced5","Type":"ContainerStarted","Data":"feb0100f6af06300e8490be3e4fc852e26221cf6b13652a059ee4f5a97cfbdee"} Feb 20 15:05:56.581765 master-0 kubenswrapper[28120]: I0220 15:05:56.581686 28120 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/metrics-server-59db89dc96-xkdwh"] Feb 20 15:05:56.584234 master-0 kubenswrapper[28120]: I0220 15:05:56.582516 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.584322 master-0 kubenswrapper[28120]: I0220 15:05:56.584274 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-d2ca3efq1akg1" Feb 20 15:05:56.595490 master-0 kubenswrapper[28120]: I0220 15:05:56.593414 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-9bcdd7684-kz2z2"] Feb 20 15:05:56.595490 master-0 kubenswrapper[28120]: I0220 15:05:56.593713 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" podUID="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" containerName="metrics-server" containerID="cri-o://6de3357e6e18954512d073202b91b501ca58384ea08b18ec75d08c4929c63531" gracePeriod=170 Feb 20 15:05:56.597502 master-0 kubenswrapper[28120]: I0220 15:05:56.596766 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-59db89dc96-xkdwh"] Feb 20 15:05:56.675105 master-0 kubenswrapper[28120]: I0220 15:05:56.675045 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hzb8\" (UniqueName: \"kubernetes.io/projected/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-kube-api-access-8hzb8\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.675105 master-0 kubenswrapper[28120]: I0220 15:05:56.675108 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-client-ca-bundle\") pod 
\"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.675318 master-0 kubenswrapper[28120]: I0220 15:05:56.675129 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-audit-log\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.675318 master-0 kubenswrapper[28120]: I0220 15:05:56.675155 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-metrics-server-audit-profiles\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.675318 master-0 kubenswrapper[28120]: I0220 15:05:56.675174 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-secret-metrics-client-certs\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.675318 master-0 kubenswrapper[28120]: I0220 15:05:56.675207 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-secret-metrics-server-tls\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.675318 master-0 
kubenswrapper[28120]: I0220 15:05:56.675243 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.776603 master-0 kubenswrapper[28120]: I0220 15:05:56.776497 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-secret-metrics-server-tls\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.777015 master-0 kubenswrapper[28120]: I0220 15:05:56.776610 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.777015 master-0 kubenswrapper[28120]: I0220 15:05:56.776869 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hzb8\" (UniqueName: \"kubernetes.io/projected/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-kube-api-access-8hzb8\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.777015 master-0 kubenswrapper[28120]: I0220 15:05:56.776899 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-client-ca-bundle\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.777015 master-0 kubenswrapper[28120]: I0220 15:05:56.776918 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-audit-log\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.777015 master-0 kubenswrapper[28120]: I0220 15:05:56.776967 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-metrics-server-audit-profiles\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.777015 master-0 kubenswrapper[28120]: I0220 15:05:56.776993 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-secret-metrics-client-certs\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.780443 master-0 kubenswrapper[28120]: I0220 15:05:56.780396 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-secret-metrics-client-certs\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.781458 master-0 
kubenswrapper[28120]: I0220 15:05:56.781421 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-metrics-server-audit-profiles\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.781665 master-0 kubenswrapper[28120]: I0220 15:05:56.781606 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.790861 master-0 kubenswrapper[28120]: I0220 15:05:56.790794 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-secret-metrics-server-tls\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.791154 master-0 kubenswrapper[28120]: I0220 15:05:56.791070 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-client-ca-bundle\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.791302 master-0 kubenswrapper[28120]: I0220 15:05:56.791245 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-audit-log\") pod 
\"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.795409 master-0 kubenswrapper[28120]: I0220 15:05:56.795355 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hzb8\" (UniqueName: \"kubernetes.io/projected/8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2-kube-api-access-8hzb8\") pod \"metrics-server-59db89dc96-xkdwh\" (UID: \"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2\") " pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.842367 master-0 kubenswrapper[28120]: I0220 15:05:56.840300 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-68cd6dbb78-rhjhv"] Feb 20 15:05:56.849758 master-0 kubenswrapper[28120]: I0220 15:05:56.844817 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:56.857154 master-0 kubenswrapper[28120]: I0220 15:05:56.856924 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 20 15:05:56.871353 master-0 kubenswrapper[28120]: I0220 15:05:56.871296 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68cd6dbb78-rhjhv"] Feb 20 15:05:56.917210 master-0 kubenswrapper[28120]: I0220 15:05:56.917163 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:05:56.981279 master-0 kubenswrapper[28120]: I0220 15:05:56.981224 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-oauth-serving-cert\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:56.981383 master-0 kubenswrapper[28120]: I0220 15:05:56.981361 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-serving-cert\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:56.981463 master-0 kubenswrapper[28120]: I0220 15:05:56.981431 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-trusted-ca-bundle\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:56.981518 master-0 kubenswrapper[28120]: I0220 15:05:56.981492 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-service-ca\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:56.981567 master-0 kubenswrapper[28120]: I0220 15:05:56.981547 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-oauth-config\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:56.981630 master-0 kubenswrapper[28120]: I0220 15:05:56.981614 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z4sc\" (UniqueName: \"kubernetes.io/projected/bbe031c3-3ab8-42af-ab24-718d83d7d121-kube-api-access-5z4sc\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:56.981679 master-0 kubenswrapper[28120]: I0220 15:05:56.981634 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-config\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.083091 master-0 kubenswrapper[28120]: I0220 15:05:57.083031 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-oauth-serving-cert\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.083322 master-0 kubenswrapper[28120]: I0220 15:05:57.083125 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-serving-cert\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.083322 master-0 kubenswrapper[28120]: I0220 15:05:57.083158 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-trusted-ca-bundle\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.083322 master-0 kubenswrapper[28120]: I0220 15:05:57.083187 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-service-ca\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.083322 master-0 kubenswrapper[28120]: I0220 15:05:57.083224 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-oauth-config\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.083322 master-0 kubenswrapper[28120]: I0220 15:05:57.083247 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z4sc\" (UniqueName: \"kubernetes.io/projected/bbe031c3-3ab8-42af-ab24-718d83d7d121-kube-api-access-5z4sc\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.083322 master-0 kubenswrapper[28120]: I0220 15:05:57.083271 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-config\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.084320 master-0 
kubenswrapper[28120]: I0220 15:05:57.083860 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-oauth-serving-cert\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.084320 master-0 kubenswrapper[28120]: I0220 15:05:57.084235 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-config\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.084320 master-0 kubenswrapper[28120]: I0220 15:05:57.084276 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-service-ca\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.085669 master-0 kubenswrapper[28120]: I0220 15:05:57.085629 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-trusted-ca-bundle\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.087404 master-0 kubenswrapper[28120]: I0220 15:05:57.087374 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-oauth-config\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.098915 master-0 
kubenswrapper[28120]: I0220 15:05:57.098885 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z4sc\" (UniqueName: \"kubernetes.io/projected/bbe031c3-3ab8-42af-ab24-718d83d7d121-kube-api-access-5z4sc\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.099117 master-0 kubenswrapper[28120]: I0220 15:05:57.099077 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-serving-cert\") pod \"console-68cd6dbb78-rhjhv\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") " pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.212122 master-0 kubenswrapper[28120]: I0220 15:05:57.212068 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:05:57.274272 master-0 kubenswrapper[28120]: I0220 15:05:57.274136 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerStarted","Data":"3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d"} Feb 20 15:05:57.274272 master-0 kubenswrapper[28120]: I0220 15:05:57.274183 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerStarted","Data":"be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1"} Feb 20 15:05:57.274272 master-0 kubenswrapper[28120]: I0220 15:05:57.274194 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerStarted","Data":"0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01"} Feb 20 15:05:57.424753 
master-0 kubenswrapper[28120]: I0220 15:05:57.424707 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-59db89dc96-xkdwh"] Feb 20 15:05:57.897088 master-0 kubenswrapper[28120]: I0220 15:05:57.897027 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68cd6dbb78-rhjhv"] Feb 20 15:05:58.206110 master-0 kubenswrapper[28120]: W0220 15:05:58.205995 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ff8eaf3_8658_43d7_b20b_e00a3a9ecad2.slice/crio-13841b4a341b45381d8fdedb49c5574d6674501288c066ebdfe3d84e2ecd4faf WatchSource:0}: Error finding container 13841b4a341b45381d8fdedb49c5574d6674501288c066ebdfe3d84e2ecd4faf: Status 404 returned error can't find the container with id 13841b4a341b45381d8fdedb49c5574d6674501288c066ebdfe3d84e2ecd4faf Feb 20 15:05:58.286841 master-0 kubenswrapper[28120]: I0220 15:05:58.286738 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" event={"ID":"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2","Type":"ContainerStarted","Data":"13841b4a341b45381d8fdedb49c5574d6674501288c066ebdfe3d84e2ecd4faf"} Feb 20 15:05:59.479695 master-0 kubenswrapper[28120]: I0220 15:05:59.479623 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 20 15:05:59.483564 master-0 kubenswrapper[28120]: I0220 15:05:59.483518 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.495831 master-0 kubenswrapper[28120]: I0220 15:05:59.487742 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 20 15:05:59.495831 master-0 kubenswrapper[28120]: I0220 15:05:59.488101 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 20 15:05:59.495831 master-0 kubenswrapper[28120]: I0220 15:05:59.488344 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 20 15:05:59.495831 master-0 kubenswrapper[28120]: I0220 15:05:59.488482 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 20 15:05:59.495831 master-0 kubenswrapper[28120]: I0220 15:05:59.488605 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 20 15:05:59.495831 master-0 kubenswrapper[28120]: I0220 15:05:59.488816 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 20 15:05:59.495831 master-0 kubenswrapper[28120]: I0220 15:05:59.488966 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 20 15:05:59.495831 master-0 kubenswrapper[28120]: I0220 15:05:59.489092 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-38nuos4fhcl5a" Feb 20 15:05:59.495831 master-0 kubenswrapper[28120]: I0220 15:05:59.491317 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 20 15:05:59.495831 master-0 kubenswrapper[28120]: I0220 15:05:59.491597 28120 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 20 15:05:59.495831 master-0 kubenswrapper[28120]: I0220 15:05:59.494885 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 20 15:05:59.505523 master-0 kubenswrapper[28120]: I0220 15:05:59.505478 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 20 15:05:59.511896 master-0 kubenswrapper[28120]: I0220 15:05:59.511840 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.623572 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.623625 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpdzb\" (UniqueName: \"kubernetes.io/projected/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-kube-api-access-fpdzb\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.623646 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 
15:05:59.623666 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.623686 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.623705 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.623726 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.623870 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.623964 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.624075 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.624107 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-config\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.624132 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.624212 28120 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.624253 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-config-out\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.624273 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.624299 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-web-config\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.624330 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.625543 master-0 kubenswrapper[28120]: I0220 15:05:59.624365 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.725690 master-0 kubenswrapper[28120]: I0220 15:05:59.725628 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-config-out\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.725690 master-0 kubenswrapper[28120]: I0220 15:05:59.725679 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.725690 master-0 kubenswrapper[28120]: I0220 15:05:59.725697 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-web-config\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726029 master-0 kubenswrapper[28120]: I0220 15:05:59.725859 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod 
\"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726029 master-0 kubenswrapper[28120]: I0220 15:05:59.725944 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726029 master-0 kubenswrapper[28120]: I0220 15:05:59.726022 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726168 master-0 kubenswrapper[28120]: I0220 15:05:59.726056 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpdzb\" (UniqueName: \"kubernetes.io/projected/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-kube-api-access-fpdzb\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726168 master-0 kubenswrapper[28120]: I0220 15:05:59.726077 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726168 master-0 kubenswrapper[28120]: I0220 15:05:59.726100 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726168 master-0 kubenswrapper[28120]: I0220 15:05:59.726115 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726168 master-0 kubenswrapper[28120]: I0220 15:05:59.726137 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726168 master-0 kubenswrapper[28120]: I0220 15:05:59.726154 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726422 master-0 kubenswrapper[28120]: I0220 15:05:59.726187 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726422 master-0 kubenswrapper[28120]: I0220 15:05:59.726216 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726422 master-0 kubenswrapper[28120]: I0220 15:05:59.726297 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726422 master-0 kubenswrapper[28120]: I0220 15:05:59.726323 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-config\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726422 master-0 kubenswrapper[28120]: I0220 15:05:59.726343 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726422 master-0 kubenswrapper[28120]: I0220 15:05:59.726404 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.726902 master-0 kubenswrapper[28120]: I0220 15:05:59.726868 28120 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.728318 master-0 kubenswrapper[28120]: I0220 15:05:59.728276 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.738853 master-0 kubenswrapper[28120]: I0220 15:05:59.737915 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.738853 master-0 kubenswrapper[28120]: I0220 15:05:59.738547 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.738853 master-0 kubenswrapper[28120]: I0220 15:05:59.738702 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.740477 master-0 kubenswrapper[28120]: I0220 15:05:59.740442 28120 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.740555 master-0 kubenswrapper[28120]: I0220 15:05:59.740487 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-web-config\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.740724 master-0 kubenswrapper[28120]: I0220 15:05:59.740673 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-config\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.740825 master-0 kubenswrapper[28120]: I0220 15:05:59.740796 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.740946 master-0 kubenswrapper[28120]: I0220 15:05:59.740908 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.742288 master-0 kubenswrapper[28120]: I0220 15:05:59.742252 28120 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.742356 master-0 kubenswrapper[28120]: I0220 15:05:59.742297 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.742356 master-0 kubenswrapper[28120]: I0220 15:05:59.742347 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.742780 master-0 kubenswrapper[28120]: I0220 15:05:59.742747 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.742831 master-0 kubenswrapper[28120]: I0220 15:05:59.742784 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.752724 master-0 kubenswrapper[28120]: I0220 15:05:59.752609 28120 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpdzb\" (UniqueName: \"kubernetes.io/projected/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-kube-api-access-fpdzb\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.753173 master-0 kubenswrapper[28120]: I0220 15:05:59.753127 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-config-out\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.754847 master-0 kubenswrapper[28120]: I0220 15:05:59.754703 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:05:59.808261 master-0 kubenswrapper[28120]: I0220 15:05:59.808162 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:06:00.329198 master-0 kubenswrapper[28120]: I0220 15:06:00.329146 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-7-master-0_ff9eff2a-6fb2-4972-b393-faa883504995/installer/0.log" Feb 20 15:06:00.329328 master-0 kubenswrapper[28120]: I0220 15:06:00.329217 28120 generic.go:334] "Generic (PLEG): container finished" podID="ff9eff2a-6fb2-4972-b393-faa883504995" containerID="199b626d3af169dd1863bd86a256a14084f72991c72c28ec4c1c5d1db9f5435f" exitCode=1 Feb 20 15:06:00.329328 master-0 kubenswrapper[28120]: I0220 15:06:00.329285 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-7-master-0" event={"ID":"ff9eff2a-6fb2-4972-b393-faa883504995","Type":"ContainerDied","Data":"199b626d3af169dd1863bd86a256a14084f72991c72c28ec4c1c5d1db9f5435f"} Feb 20 15:06:00.331223 master-0 kubenswrapper[28120]: I0220 15:06:00.331162 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cd6dbb78-rhjhv" event={"ID":"bbe031c3-3ab8-42af-ab24-718d83d7d121","Type":"ContainerStarted","Data":"92f96c88d30a444197ed70e8bd675e0d553d6337d16c8d4ecd884eae81ff681e"} Feb 20 15:06:00.710761 master-0 kubenswrapper[28120]: I0220 15:06:00.710727 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-7-master-0_ff9eff2a-6fb2-4972-b393-faa883504995/installer/0.log" Feb 20 15:06:00.711159 master-0 kubenswrapper[28120]: I0220 15:06:00.710788 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-7-master-0" Feb 20 15:06:00.811189 master-0 kubenswrapper[28120]: I0220 15:06:00.810766 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 20 15:06:00.815608 master-0 kubenswrapper[28120]: W0220 15:06:00.815568 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f79c358_60c9_4d3e_a830_ce6cab8e39d6.slice/crio-35289c6cafcf1fc5af2eb07d3f95df14f1ae9ed75224929c6d3858e4ed0926ad WatchSource:0}: Error finding container 35289c6cafcf1fc5af2eb07d3f95df14f1ae9ed75224929c6d3858e4ed0926ad: Status 404 returned error can't find the container with id 35289c6cafcf1fc5af2eb07d3f95df14f1ae9ed75224929c6d3858e4ed0926ad Feb 20 15:06:00.844012 master-0 kubenswrapper[28120]: I0220 15:06:00.843958 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff9eff2a-6fb2-4972-b393-faa883504995-kube-api-access\") pod \"ff9eff2a-6fb2-4972-b393-faa883504995\" (UID: \"ff9eff2a-6fb2-4972-b393-faa883504995\") " Feb 20 15:06:00.844107 master-0 kubenswrapper[28120]: I0220 15:06:00.844050 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff9eff2a-6fb2-4972-b393-faa883504995-var-lock\") pod \"ff9eff2a-6fb2-4972-b393-faa883504995\" (UID: \"ff9eff2a-6fb2-4972-b393-faa883504995\") " Feb 20 15:06:00.844199 master-0 kubenswrapper[28120]: I0220 15:06:00.844158 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff9eff2a-6fb2-4972-b393-faa883504995-kubelet-dir\") pod \"ff9eff2a-6fb2-4972-b393-faa883504995\" (UID: \"ff9eff2a-6fb2-4972-b393-faa883504995\") " Feb 20 15:06:00.844670 master-0 kubenswrapper[28120]: I0220 15:06:00.844642 28120 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff9eff2a-6fb2-4972-b393-faa883504995-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ff9eff2a-6fb2-4972-b393-faa883504995" (UID: "ff9eff2a-6fb2-4972-b393-faa883504995"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:06:00.845477 master-0 kubenswrapper[28120]: I0220 15:06:00.845445 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff9eff2a-6fb2-4972-b393-faa883504995-var-lock" (OuterVolumeSpecName: "var-lock") pod "ff9eff2a-6fb2-4972-b393-faa883504995" (UID: "ff9eff2a-6fb2-4972-b393-faa883504995"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:06:00.847480 master-0 kubenswrapper[28120]: I0220 15:06:00.847444 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9eff2a-6fb2-4972-b393-faa883504995-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ff9eff2a-6fb2-4972-b393-faa883504995" (UID: "ff9eff2a-6fb2-4972-b393-faa883504995"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:06:00.945657 master-0 kubenswrapper[28120]: I0220 15:06:00.945620 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff9eff2a-6fb2-4972-b393-faa883504995-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:00.945657 master-0 kubenswrapper[28120]: I0220 15:06:00.945655 28120 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff9eff2a-6fb2-4972-b393-faa883504995-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:00.945793 master-0 kubenswrapper[28120]: I0220 15:06:00.945665 28120 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff9eff2a-6fb2-4972-b393-faa883504995-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:01.346966 master-0 kubenswrapper[28120]: I0220 15:06:01.346829 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerStarted","Data":"8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d"} Feb 20 15:06:01.346966 master-0 kubenswrapper[28120]: I0220 15:06:01.346882 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerStarted","Data":"eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b"} Feb 20 15:06:01.353652 master-0 kubenswrapper[28120]: I0220 15:06:01.353550 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cd6dbb78-rhjhv" event={"ID":"bbe031c3-3ab8-42af-ab24-718d83d7d121","Type":"ContainerStarted","Data":"b3ad512efb5dbbcc33d6138b656a0885487bca2c87d7c1ae457add1c2c74ff8e"} Feb 20 15:06:01.358775 master-0 kubenswrapper[28120]: I0220 15:06:01.358368 28120 generic.go:334] 
"Generic (PLEG): container finished" podID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerID="c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de" exitCode=0 Feb 20 15:06:01.358775 master-0 kubenswrapper[28120]: I0220 15:06:01.358479 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerDied","Data":"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"} Feb 20 15:06:01.358775 master-0 kubenswrapper[28120]: I0220 15:06:01.358531 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerStarted","Data":"35289c6cafcf1fc5af2eb07d3f95df14f1ae9ed75224929c6d3858e4ed0926ad"} Feb 20 15:06:01.360221 master-0 kubenswrapper[28120]: I0220 15:06:01.360111 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bcb747b79-dfz8f" event={"ID":"2ff24014-84b3-43df-a20a-7caa44088b0c","Type":"ContainerStarted","Data":"43fec6224584ee0092a54e87a413a46e7446e54f8e1d6d0f153460368c1604d6"} Feb 20 15:06:01.362506 master-0 kubenswrapper[28120]: I0220 15:06:01.362468 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" event={"ID":"1c673ef8-317e-40ef-a225-412a116cced5","Type":"ContainerStarted","Data":"76f283d7083cfa73ba2968f3acf4e6e5392ba1767e05f8a81a43c2f710d3f713"} Feb 20 15:06:01.362563 master-0 kubenswrapper[28120]: I0220 15:06:01.362504 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" event={"ID":"1c673ef8-317e-40ef-a225-412a116cced5","Type":"ContainerStarted","Data":"7dab35ea5ad257299eade113c550990ecae72d8c047bcdfc1cc4def3f33a2dec"} Feb 20 15:06:01.362563 master-0 kubenswrapper[28120]: I0220 15:06:01.362518 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" event={"ID":"1c673ef8-317e-40ef-a225-412a116cced5","Type":"ContainerStarted","Data":"927a68f9e4eca7ea38d0839d46c30439f8e20b0619d73b6e6f51bc4b47ca8d81"} Feb 20 15:06:01.364996 master-0 kubenswrapper[28120]: I0220 15:06:01.364744 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-7-master-0_ff9eff2a-6fb2-4972-b393-faa883504995/installer/0.log" Feb 20 15:06:01.364996 master-0 kubenswrapper[28120]: I0220 15:06:01.364837 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-7-master-0" event={"ID":"ff9eff2a-6fb2-4972-b393-faa883504995","Type":"ContainerDied","Data":"cbd93db5ce6e8e8ebf11a51c7296d7478ce3b60cf0071c5850483baf691b991b"} Feb 20 15:06:01.364996 master-0 kubenswrapper[28120]: I0220 15:06:01.364889 28120 scope.go:117] "RemoveContainer" containerID="199b626d3af169dd1863bd86a256a14084f72991c72c28ec4c1c5d1db9f5435f" Feb 20 15:06:01.365556 master-0 kubenswrapper[28120]: I0220 15:06:01.365505 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-7-master-0" Feb 20 15:06:01.371218 master-0 kubenswrapper[28120]: I0220 15:06:01.371170 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" event={"ID":"8ff8eaf3-8658-43d7-b20b-e00a3a9ecad2","Type":"ContainerStarted","Data":"f5e88dd553cb7a5f736903d554c26f1625197b3d9afd7fca13b19d090904d926"} Feb 20 15:06:01.376732 master-0 kubenswrapper[28120]: I0220 15:06:01.376653 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-68cd6dbb78-rhjhv" podStartSLOduration=4.95555919 podStartE2EDuration="5.37663645s" podCreationTimestamp="2026-02-20 15:05:56 +0000 UTC" firstStartedPulling="2026-02-20 15:06:00.275995924 +0000 UTC m=+298.536789497" lastFinishedPulling="2026-02-20 15:06:00.697073184 +0000 UTC m=+298.957866757" observedRunningTime="2026-02-20 15:06:01.376280041 +0000 UTC m=+299.637073694" watchObservedRunningTime="2026-02-20 15:06:01.37663645 +0000 UTC m=+299.637430023" Feb 20 15:06:01.446573 master-0 kubenswrapper[28120]: I0220 15:06:01.446488 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6bcb747b79-dfz8f" podStartSLOduration=2.447461911 podStartE2EDuration="8.446470758s" podCreationTimestamp="2026-02-20 15:05:53 +0000 UTC" firstStartedPulling="2026-02-20 15:05:54.33947978 +0000 UTC m=+292.600273343" lastFinishedPulling="2026-02-20 15:06:00.338488607 +0000 UTC m=+298.599282190" observedRunningTime="2026-02-20 15:06:01.438336103 +0000 UTC m=+299.699129706" watchObservedRunningTime="2026-02-20 15:06:01.446470758 +0000 UTC m=+299.707264321" Feb 20 15:06:01.469666 master-0 kubenswrapper[28120]: I0220 15:06:01.469592 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" podStartSLOduration=5.469572579 podStartE2EDuration="5.469572579s" 
podCreationTimestamp="2026-02-20 15:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:06:01.461893496 +0000 UTC m=+299.722687089" watchObservedRunningTime="2026-02-20 15:06:01.469572579 +0000 UTC m=+299.730366152" Feb 20 15:06:01.486406 master-0 kubenswrapper[28120]: I0220 15:06:01.486330 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-7-master-0"] Feb 20 15:06:01.496242 master-0 kubenswrapper[28120]: I0220 15:06:01.496192 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-7-master-0"] Feb 20 15:06:02.069879 master-0 kubenswrapper[28120]: I0220 15:06:02.069834 28120 kubelet.go:1505] "Image garbage collection succeeded" Feb 20 15:06:02.075658 master-0 kubenswrapper[28120]: I0220 15:06:02.074974 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9eff2a-6fb2-4972-b393-faa883504995" path="/var/lib/kubelet/pods/ff9eff2a-6fb2-4972-b393-faa883504995/volumes" Feb 20 15:06:02.381905 master-0 kubenswrapper[28120]: I0220 15:06:02.381839 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" event={"ID":"1c673ef8-317e-40ef-a225-412a116cced5","Type":"ContainerStarted","Data":"e4b3d19cc55295485619b07f584ef754eb5ac60d27c714a4653273501ca23fd9"} Feb 20 15:06:02.382252 master-0 kubenswrapper[28120]: I0220 15:06:02.381908 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" event={"ID":"1c673ef8-317e-40ef-a225-412a116cced5","Type":"ContainerStarted","Data":"48c0e16f9bc56af3c72e1dca1771d05200208d87f4702d411d6125f3c6e7b36f"} Feb 20 15:06:02.382252 master-0 kubenswrapper[28120]: I0220 15:06:02.382111 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 
15:06:02.382252 master-0 kubenswrapper[28120]: I0220 15:06:02.382163 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" event={"ID":"1c673ef8-317e-40ef-a225-412a116cced5","Type":"ContainerStarted","Data":"a508663f8420a0b81c8df82c759a571473ecace851146a5f094a3311c0c20e69"} Feb 20 15:06:02.389876 master-0 kubenswrapper[28120]: I0220 15:06:02.389772 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerStarted","Data":"dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6"} Feb 20 15:06:02.462918 master-0 kubenswrapper[28120]: I0220 15:06:02.462839 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" podStartSLOduration=2.734971018 podStartE2EDuration="9.462818892s" podCreationTimestamp="2026-02-20 15:05:53 +0000 UTC" firstStartedPulling="2026-02-20 15:05:54.720608834 +0000 UTC m=+292.981402407" lastFinishedPulling="2026-02-20 15:06:01.448456718 +0000 UTC m=+299.709250281" observedRunningTime="2026-02-20 15:06:02.418513396 +0000 UTC m=+300.679307039" watchObservedRunningTime="2026-02-20 15:06:02.462818892 +0000 UTC m=+300.723612455" Feb 20 15:06:02.464825 master-0 kubenswrapper[28120]: I0220 15:06:02.464780 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.350130511 podStartE2EDuration="10.464772541s" podCreationTimestamp="2026-02-20 15:05:52 +0000 UTC" firstStartedPulling="2026-02-20 15:05:54.248380527 +0000 UTC m=+292.509174090" lastFinishedPulling="2026-02-20 15:06:01.363022557 +0000 UTC m=+299.623816120" observedRunningTime="2026-02-20 15:06:02.462132494 +0000 UTC m=+300.722926097" watchObservedRunningTime="2026-02-20 15:06:02.464772541 +0000 UTC m=+300.725566104" Feb 20 15:06:03.386726 master-0 
kubenswrapper[28120]: I0220 15:06:03.386665 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"] Feb 20 15:06:03.387237 master-0 kubenswrapper[28120]: I0220 15:06:03.386877 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" podUID="2f8fbacc-4698-43ac-8941-e3a3db7c5dea" containerName="controller-manager" containerID="cri-o://10d92bc258f8f8b9a79201f354e7c21782133cfd415da7b5b4cb3b3577c97674" gracePeriod=30 Feb 20 15:06:03.443316 master-0 kubenswrapper[28120]: I0220 15:06:03.441459 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"] Feb 20 15:06:03.443639 master-0 kubenswrapper[28120]: I0220 15:06:03.443605 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" podUID="4e11583a-e3a5-4634-89bc-6edb03f6ba02" containerName="route-controller-manager" containerID="cri-o://b83059054242c5a75ae5aa3fcc22958bcea0cc829794298aa24a6a97133df2bf" gracePeriod=30 Feb 20 15:06:03.894540 master-0 kubenswrapper[28120]: I0220 15:06:03.894398 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6bcb747b79-dfz8f" Feb 20 15:06:03.894882 master-0 kubenswrapper[28120]: I0220 15:06:03.894593 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6bcb747b79-dfz8f" Feb 20 15:06:03.896043 master-0 kubenswrapper[28120]: I0220 15:06:03.896000 28120 patch_prober.go:28] interesting pod/console-6bcb747b79-dfz8f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Feb 20 15:06:03.896101 master-0 
kubenswrapper[28120]: I0220 15:06:03.896062 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6bcb747b79-dfz8f" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Feb 20 15:06:04.406741 master-0 kubenswrapper[28120]: I0220 15:06:04.406686 28120 generic.go:334] "Generic (PLEG): container finished" podID="4e11583a-e3a5-4634-89bc-6edb03f6ba02" containerID="b83059054242c5a75ae5aa3fcc22958bcea0cc829794298aa24a6a97133df2bf" exitCode=0 Feb 20 15:06:04.407414 master-0 kubenswrapper[28120]: I0220 15:06:04.406776 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" event={"ID":"4e11583a-e3a5-4634-89bc-6edb03f6ba02","Type":"ContainerDied","Data":"b83059054242c5a75ae5aa3fcc22958bcea0cc829794298aa24a6a97133df2bf"} Feb 20 15:06:04.408572 master-0 kubenswrapper[28120]: I0220 15:06:04.408544 28120 generic.go:334] "Generic (PLEG): container finished" podID="2f8fbacc-4698-43ac-8941-e3a3db7c5dea" containerID="10d92bc258f8f8b9a79201f354e7c21782133cfd415da7b5b4cb3b3577c97674" exitCode=0 Feb 20 15:06:04.408701 master-0 kubenswrapper[28120]: I0220 15:06:04.408588 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" event={"ID":"2f8fbacc-4698-43ac-8941-e3a3db7c5dea","Type":"ContainerDied","Data":"10d92bc258f8f8b9a79201f354e7c21782133cfd415da7b5b4cb3b3577c97674"} Feb 20 15:06:05.362746 master-0 kubenswrapper[28120]: I0220 15:06:05.362619 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" Feb 20 15:06:05.416805 master-0 kubenswrapper[28120]: I0220 15:06:05.416763 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v"] Feb 20 15:06:05.417342 master-0 kubenswrapper[28120]: E0220 15:06:05.417315 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e11583a-e3a5-4634-89bc-6edb03f6ba02" containerName="route-controller-manager" Feb 20 15:06:05.417416 master-0 kubenswrapper[28120]: I0220 15:06:05.417352 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e11583a-e3a5-4634-89bc-6edb03f6ba02" containerName="route-controller-manager" Feb 20 15:06:05.417416 master-0 kubenswrapper[28120]: E0220 15:06:05.417366 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9eff2a-6fb2-4972-b393-faa883504995" containerName="installer" Feb 20 15:06:05.417416 master-0 kubenswrapper[28120]: I0220 15:06:05.417374 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9eff2a-6fb2-4972-b393-faa883504995" containerName="installer" Feb 20 15:06:05.417570 master-0 kubenswrapper[28120]: I0220 15:06:05.417549 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e11583a-e3a5-4634-89bc-6edb03f6ba02" containerName="route-controller-manager" Feb 20 15:06:05.417649 master-0 kubenswrapper[28120]: I0220 15:06:05.417630 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9eff2a-6fb2-4972-b393-faa883504995" containerName="installer" Feb 20 15:06:05.418313 master-0 kubenswrapper[28120]: I0220 15:06:05.418278 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.440777 master-0 kubenswrapper[28120]: I0220 15:06:05.440725 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmg86\" (UniqueName: \"kubernetes.io/projected/4e11583a-e3a5-4634-89bc-6edb03f6ba02-kube-api-access-kmg86\") pod \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " Feb 20 15:06:05.441001 master-0 kubenswrapper[28120]: I0220 15:06:05.440829 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" event={"ID":"4e11583a-e3a5-4634-89bc-6edb03f6ba02","Type":"ContainerDied","Data":"48ef437a777069d0baa3a357450465f63d399658e873940561cb3da1d9dacc37"} Feb 20 15:06:05.441001 master-0 kubenswrapper[28120]: I0220 15:06:05.440939 28120 scope.go:117] "RemoveContainer" containerID="b83059054242c5a75ae5aa3fcc22958bcea0cc829794298aa24a6a97133df2bf" Feb 20 15:06:05.441001 master-0 kubenswrapper[28120]: I0220 15:06:05.440984 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e11583a-e3a5-4634-89bc-6edb03f6ba02-client-ca\") pod \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " Feb 20 15:06:05.441133 master-0 kubenswrapper[28120]: I0220 15:06:05.440784 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r" Feb 20 15:06:05.441364 master-0 kubenswrapper[28120]: I0220 15:06:05.441196 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e11583a-e3a5-4634-89bc-6edb03f6ba02-config\") pod \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " Feb 20 15:06:05.441475 master-0 kubenswrapper[28120]: I0220 15:06:05.441450 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e11583a-e3a5-4634-89bc-6edb03f6ba02-serving-cert\") pod \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\" (UID: \"4e11583a-e3a5-4634-89bc-6edb03f6ba02\") " Feb 20 15:06:05.442091 master-0 kubenswrapper[28120]: I0220 15:06:05.441961 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e11583a-e3a5-4634-89bc-6edb03f6ba02-client-ca" (OuterVolumeSpecName: "client-ca") pod "4e11583a-e3a5-4634-89bc-6edb03f6ba02" (UID: "4e11583a-e3a5-4634-89bc-6edb03f6ba02"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:06:05.442374 master-0 kubenswrapper[28120]: I0220 15:06:05.442341 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e11583a-e3a5-4634-89bc-6edb03f6ba02-config" (OuterVolumeSpecName: "config") pod "4e11583a-e3a5-4634-89bc-6edb03f6ba02" (UID: "4e11583a-e3a5-4634-89bc-6edb03f6ba02"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:06:05.443025 master-0 kubenswrapper[28120]: I0220 15:06:05.443005 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v"] Feb 20 15:06:05.443437 master-0 kubenswrapper[28120]: I0220 15:06:05.443375 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e11583a-e3a5-4634-89bc-6edb03f6ba02-kube-api-access-kmg86" (OuterVolumeSpecName: "kube-api-access-kmg86") pod "4e11583a-e3a5-4634-89bc-6edb03f6ba02" (UID: "4e11583a-e3a5-4634-89bc-6edb03f6ba02"). InnerVolumeSpecName "kube-api-access-kmg86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:06:05.443952 master-0 kubenswrapper[28120]: I0220 15:06:05.443847 28120 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e11583a-e3a5-4634-89bc-6edb03f6ba02-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:05.444013 master-0 kubenswrapper[28120]: I0220 15:06:05.443970 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e11583a-e3a5-4634-89bc-6edb03f6ba02-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:05.448250 master-0 kubenswrapper[28120]: I0220 15:06:05.448188 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e11583a-e3a5-4634-89bc-6edb03f6ba02-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4e11583a-e3a5-4634-89bc-6edb03f6ba02" (UID: "4e11583a-e3a5-4634-89bc-6edb03f6ba02"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:06:05.461969 master-0 kubenswrapper[28120]: I0220 15:06:05.460529 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerStarted","Data":"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"} Feb 20 15:06:05.461969 master-0 kubenswrapper[28120]: I0220 15:06:05.460605 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerStarted","Data":"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"} Feb 20 15:06:05.542111 master-0 kubenswrapper[28120]: I0220 15:06:05.542067 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" Feb 20 15:06:05.550577 master-0 kubenswrapper[28120]: I0220 15:06:05.550528 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xzwx\" (UniqueName: \"kubernetes.io/projected/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-kube-api-access-7xzwx\") pod \"route-controller-manager-dd587fc6d-zkg8v\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.550676 master-0 kubenswrapper[28120]: I0220 15:06:05.550597 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-config\") pod \"route-controller-manager-dd587fc6d-zkg8v\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.550776 master-0 kubenswrapper[28120]: I0220 15:06:05.550721 28120 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-client-ca\") pod \"route-controller-manager-dd587fc6d-zkg8v\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.550961 master-0 kubenswrapper[28120]: I0220 15:06:05.550899 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-serving-cert\") pod \"route-controller-manager-dd587fc6d-zkg8v\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.551205 master-0 kubenswrapper[28120]: I0220 15:06:05.551152 28120 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e11583a-e3a5-4634-89bc-6edb03f6ba02-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:05.551205 master-0 kubenswrapper[28120]: I0220 15:06:05.551181 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmg86\" (UniqueName: \"kubernetes.io/projected/4e11583a-e3a5-4634-89bc-6edb03f6ba02-kube-api-access-kmg86\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:05.652309 master-0 kubenswrapper[28120]: I0220 15:06:05.652229 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-client-ca\") pod \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " Feb 20 15:06:05.652529 master-0 kubenswrapper[28120]: I0220 15:06:05.652514 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k4cl\" (UniqueName: 
\"kubernetes.io/projected/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-kube-api-access-6k4cl\") pod \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " Feb 20 15:06:05.653582 master-0 kubenswrapper[28120]: I0220 15:06:05.653054 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-client-ca" (OuterVolumeSpecName: "client-ca") pod "2f8fbacc-4698-43ac-8941-e3a3db7c5dea" (UID: "2f8fbacc-4698-43ac-8941-e3a3db7c5dea"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:06:05.653739 master-0 kubenswrapper[28120]: I0220 15:06:05.653714 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-config\") pod \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " Feb 20 15:06:05.654750 master-0 kubenswrapper[28120]: I0220 15:06:05.654394 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-config" (OuterVolumeSpecName: "config") pod "2f8fbacc-4698-43ac-8941-e3a3db7c5dea" (UID: "2f8fbacc-4698-43ac-8941-e3a3db7c5dea"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:06:05.654854 master-0 kubenswrapper[28120]: I0220 15:06:05.654841 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-proxy-ca-bundles\") pod \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " Feb 20 15:06:05.655839 master-0 kubenswrapper[28120]: I0220 15:06:05.655366 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-kube-api-access-6k4cl" (OuterVolumeSpecName: "kube-api-access-6k4cl") pod "2f8fbacc-4698-43ac-8941-e3a3db7c5dea" (UID: "2f8fbacc-4698-43ac-8941-e3a3db7c5dea"). InnerVolumeSpecName "kube-api-access-6k4cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:06:05.655951 master-0 kubenswrapper[28120]: I0220 15:06:05.655707 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2f8fbacc-4698-43ac-8941-e3a3db7c5dea" (UID: "2f8fbacc-4698-43ac-8941-e3a3db7c5dea"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:06:05.656083 master-0 kubenswrapper[28120]: I0220 15:06:05.656070 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-serving-cert\") pod \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\" (UID: \"2f8fbacc-4698-43ac-8941-e3a3db7c5dea\") " Feb 20 15:06:05.657360 master-0 kubenswrapper[28120]: I0220 15:06:05.657343 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-client-ca\") pod \"route-controller-manager-dd587fc6d-zkg8v\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.658085 master-0 kubenswrapper[28120]: I0220 15:06:05.658071 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-serving-cert\") pod \"route-controller-manager-dd587fc6d-zkg8v\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.658341 master-0 kubenswrapper[28120]: I0220 15:06:05.658327 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2f8fbacc-4698-43ac-8941-e3a3db7c5dea" (UID: "2f8fbacc-4698-43ac-8941-e3a3db7c5dea"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:06:05.658418 master-0 kubenswrapper[28120]: I0220 15:06:05.658388 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-client-ca\") pod \"route-controller-manager-dd587fc6d-zkg8v\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.658579 master-0 kubenswrapper[28120]: I0220 15:06:05.658565 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xzwx\" (UniqueName: \"kubernetes.io/projected/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-kube-api-access-7xzwx\") pod \"route-controller-manager-dd587fc6d-zkg8v\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.659094 master-0 kubenswrapper[28120]: I0220 15:06:05.659080 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-config\") pod \"route-controller-manager-dd587fc6d-zkg8v\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.661841 master-0 kubenswrapper[28120]: I0220 15:06:05.661826 28120 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:05.662148 master-0 kubenswrapper[28120]: I0220 15:06:05.662137 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k4cl\" (UniqueName: \"kubernetes.io/projected/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-kube-api-access-6k4cl\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:05.662237 master-0 
kubenswrapper[28120]: I0220 15:06:05.662227 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:05.664215 master-0 kubenswrapper[28120]: I0220 15:06:05.664200 28120 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:05.664312 master-0 kubenswrapper[28120]: I0220 15:06:05.664301 28120 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f8fbacc-4698-43ac-8941-e3a3db7c5dea-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:05.664408 master-0 kubenswrapper[28120]: I0220 15:06:05.661742 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-config\") pod \"route-controller-manager-dd587fc6d-zkg8v\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.664531 master-0 kubenswrapper[28120]: I0220 15:06:05.661371 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-serving-cert\") pod \"route-controller-manager-dd587fc6d-zkg8v\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.677310 master-0 kubenswrapper[28120]: I0220 15:06:05.677268 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xzwx\" (UniqueName: \"kubernetes.io/projected/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-kube-api-access-7xzwx\") pod 
\"route-controller-manager-dd587fc6d-zkg8v\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.754878 master-0 kubenswrapper[28120]: I0220 15:06:05.754845 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:05.781273 master-0 kubenswrapper[28120]: I0220 15:06:05.780982 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"] Feb 20 15:06:05.787378 master-0 kubenswrapper[28120]: I0220 15:06:05.787327 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcf8cbd8f-6gz7r"] Feb 20 15:06:06.067065 master-0 kubenswrapper[28120]: I0220 15:06:06.066961 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e11583a-e3a5-4634-89bc-6edb03f6ba02" path="/var/lib/kubelet/pods/4e11583a-e3a5-4634-89bc-6edb03f6ba02/volumes" Feb 20 15:06:06.215359 master-0 kubenswrapper[28120]: I0220 15:06:06.215307 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v"] Feb 20 15:06:06.473403 master-0 kubenswrapper[28120]: I0220 15:06:06.473275 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerStarted","Data":"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"} Feb 20 15:06:06.473403 master-0 kubenswrapper[28120]: I0220 15:06:06.473334 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerStarted","Data":"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"} Feb 20 
15:06:06.473403 master-0 kubenswrapper[28120]: I0220 15:06:06.473396 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerStarted","Data":"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"} Feb 20 15:06:06.474008 master-0 kubenswrapper[28120]: I0220 15:06:06.473409 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerStarted","Data":"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"} Feb 20 15:06:06.475855 master-0 kubenswrapper[28120]: I0220 15:06:06.475796 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" event={"ID":"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a","Type":"ContainerStarted","Data":"e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782"} Feb 20 15:06:06.475997 master-0 kubenswrapper[28120]: I0220 15:06:06.475883 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" event={"ID":"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a","Type":"ContainerStarted","Data":"1e87f29e65b2beccf98b92c82824b97df4b2eac69862d655aebbfd9197f6622d"} Feb 20 15:06:06.476082 master-0 kubenswrapper[28120]: I0220 15:06:06.476050 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:06.481729 master-0 kubenswrapper[28120]: I0220 15:06:06.481610 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" event={"ID":"2f8fbacc-4698-43ac-8941-e3a3db7c5dea","Type":"ContainerDied","Data":"6df8cdc58f9d96783bb3ca6e76faf33668e2e87dcd7359442ebf564c1915f2ff"} Feb 20 15:06:06.481826 master-0 
kubenswrapper[28120]: I0220 15:06:06.481743 28120 scope.go:117] "RemoveContainer" containerID="10d92bc258f8f8b9a79201f354e7c21782133cfd415da7b5b4cb3b3577c97674" Feb 20 15:06:06.481826 master-0 kubenswrapper[28120]: I0220 15:06:06.481688 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c6bcb95bb-jd879" Feb 20 15:06:06.540156 master-0 kubenswrapper[28120]: I0220 15:06:06.539986 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.045617272 podStartE2EDuration="7.539963012s" podCreationTimestamp="2026-02-20 15:05:59 +0000 UTC" firstStartedPulling="2026-02-20 15:06:01.360370961 +0000 UTC m=+299.621164524" lastFinishedPulling="2026-02-20 15:06:04.854716691 +0000 UTC m=+303.115510264" observedRunningTime="2026-02-20 15:06:06.521006074 +0000 UTC m=+304.781799657" watchObservedRunningTime="2026-02-20 15:06:06.539963012 +0000 UTC m=+304.800756575" Feb 20 15:06:06.545254 master-0 kubenswrapper[28120]: I0220 15:06:06.545152 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"] Feb 20 15:06:06.551007 master-0 kubenswrapper[28120]: I0220 15:06:06.550953 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c6bcb95bb-jd879"] Feb 20 15:06:06.584061 master-0 kubenswrapper[28120]: I0220 15:06:06.583970 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" podStartSLOduration=3.583943159 podStartE2EDuration="3.583943159s" podCreationTimestamp="2026-02-20 15:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:06:06.575338032 +0000 UTC m=+304.836131605" 
watchObservedRunningTime="2026-02-20 15:06:06.583943159 +0000 UTC m=+304.844736742" Feb 20 15:06:07.049725 master-0 kubenswrapper[28120]: I0220 15:06:07.049665 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:07.212694 master-0 kubenswrapper[28120]: I0220 15:06:07.212635 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:06:07.212694 master-0 kubenswrapper[28120]: I0220 15:06:07.212686 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:06:07.214156 master-0 kubenswrapper[28120]: I0220 15:06:07.214107 28120 patch_prober.go:28] interesting pod/console-68cd6dbb78-rhjhv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 20 15:06:07.214234 master-0 kubenswrapper[28120]: I0220 15:06:07.214164 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68cd6dbb78-rhjhv" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 20 15:06:08.072897 master-0 kubenswrapper[28120]: I0220 15:06:08.072790 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8fbacc-4698-43ac-8941-e3a3db7c5dea" path="/var/lib/kubelet/pods/2f8fbacc-4698-43ac-8941-e3a3db7c5dea/volumes" Feb 20 15:06:08.106373 master-0 kubenswrapper[28120]: I0220 15:06:08.106309 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5458766959-m6tn4"] Feb 20 15:06:08.106736 master-0 kubenswrapper[28120]: E0220 15:06:08.106699 28120 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8fbacc-4698-43ac-8941-e3a3db7c5dea" containerName="controller-manager" Feb 20 15:06:08.106811 master-0 kubenswrapper[28120]: I0220 15:06:08.106730 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8fbacc-4698-43ac-8941-e3a3db7c5dea" containerName="controller-manager" Feb 20 15:06:08.107062 master-0 kubenswrapper[28120]: I0220 15:06:08.107026 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8fbacc-4698-43ac-8941-e3a3db7c5dea" containerName="controller-manager" Feb 20 15:06:08.107785 master-0 kubenswrapper[28120]: I0220 15:06:08.107736 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.113072 master-0 kubenswrapper[28120]: I0220 15:06:08.113019 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-9zvh6" Feb 20 15:06:08.113538 master-0 kubenswrapper[28120]: I0220 15:06:08.113504 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 20 15:06:08.113741 master-0 kubenswrapper[28120]: I0220 15:06:08.113685 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 20 15:06:08.114407 master-0 kubenswrapper[28120]: I0220 15:06:08.114350 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 20 15:06:08.119149 master-0 kubenswrapper[28120]: I0220 15:06:08.119078 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 20 15:06:08.119423 master-0 kubenswrapper[28120]: I0220 15:06:08.119380 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" 
Feb 20 15:06:08.119650 master-0 kubenswrapper[28120]: I0220 15:06:08.119613 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 20 15:06:08.122437 master-0 kubenswrapper[28120]: I0220 15:06:08.122378 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5458766959-m6tn4"] Feb 20 15:06:08.215530 master-0 kubenswrapper[28120]: I0220 15:06:08.215448 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-client-ca\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.215737 master-0 kubenswrapper[28120]: I0220 15:06:08.215597 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9674e2-3da5-4014-86ed-6498001508c8-serving-cert\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.215737 master-0 kubenswrapper[28120]: I0220 15:06:08.215628 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9kjj\" (UniqueName: \"kubernetes.io/projected/fc9674e2-3da5-4014-86ed-6498001508c8-kube-api-access-n9kjj\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.215852 master-0 kubenswrapper[28120]: I0220 15:06:08.215785 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-proxy-ca-bundles\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.216115 master-0 kubenswrapper[28120]: I0220 15:06:08.216026 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-config\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.317158 master-0 kubenswrapper[28120]: I0220 15:06:08.317089 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-client-ca\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.317392 master-0 kubenswrapper[28120]: I0220 15:06:08.317182 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9674e2-3da5-4014-86ed-6498001508c8-serving-cert\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.317392 master-0 kubenswrapper[28120]: I0220 15:06:08.317211 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9kjj\" (UniqueName: \"kubernetes.io/projected/fc9674e2-3da5-4014-86ed-6498001508c8-kube-api-access-n9kjj\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 
15:06:08.317392 master-0 kubenswrapper[28120]: I0220 15:06:08.317239 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-proxy-ca-bundles\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.317587 master-0 kubenswrapper[28120]: I0220 15:06:08.317523 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-config\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.319883 master-0 kubenswrapper[28120]: I0220 15:06:08.319815 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-client-ca\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.320017 master-0 kubenswrapper[28120]: I0220 15:06:08.319994 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-config\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.320117 master-0 kubenswrapper[28120]: I0220 15:06:08.320073 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-proxy-ca-bundles\") pod \"controller-manager-5458766959-m6tn4\" (UID: 
\"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.325060 master-0 kubenswrapper[28120]: I0220 15:06:08.324948 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9674e2-3da5-4014-86ed-6498001508c8-serving-cert\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.345698 master-0 kubenswrapper[28120]: I0220 15:06:08.345648 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9kjj\" (UniqueName: \"kubernetes.io/projected/fc9674e2-3da5-4014-86ed-6498001508c8-kube-api-access-n9kjj\") pod \"controller-manager-5458766959-m6tn4\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.454120 master-0 kubenswrapper[28120]: I0220 15:06:08.454040 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:08.957773 master-0 kubenswrapper[28120]: W0220 15:06:08.957708 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc9674e2_3da5_4014_86ed_6498001508c8.slice/crio-4d96da117087fa997128e35ced257cd8a3ecdf7090723175519e9584a990136f WatchSource:0}: Error finding container 4d96da117087fa997128e35ced257cd8a3ecdf7090723175519e9584a990136f: Status 404 returned error can't find the container with id 4d96da117087fa997128e35ced257cd8a3ecdf7090723175519e9584a990136f Feb 20 15:06:08.961154 master-0 kubenswrapper[28120]: I0220 15:06:08.961097 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5458766959-m6tn4"] Feb 20 15:06:09.229264 master-0 kubenswrapper[28120]: I0220 15:06:09.229160 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5cb8875886-98kx5" Feb 20 15:06:09.512998 master-0 kubenswrapper[28120]: I0220 15:06:09.512912 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" event={"ID":"fc9674e2-3da5-4014-86ed-6498001508c8","Type":"ContainerStarted","Data":"2a5ddcebadd5cb1873be6fffdf4c849705745e92404a961a579be12e64de8044"} Feb 20 15:06:09.512998 master-0 kubenswrapper[28120]: I0220 15:06:09.513001 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" event={"ID":"fc9674e2-3da5-4014-86ed-6498001508c8","Type":"ContainerStarted","Data":"4d96da117087fa997128e35ced257cd8a3ecdf7090723175519e9584a990136f"} Feb 20 15:06:09.513406 master-0 kubenswrapper[28120]: I0220 15:06:09.513374 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 
15:06:09.520089 master-0 kubenswrapper[28120]: I0220 15:06:09.520053 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:09.530944 master-0 kubenswrapper[28120]: I0220 15:06:09.530866 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" podStartSLOduration=6.53085274 podStartE2EDuration="6.53085274s" podCreationTimestamp="2026-02-20 15:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:06:09.528838579 +0000 UTC m=+307.789632142" watchObservedRunningTime="2026-02-20 15:06:09.53085274 +0000 UTC m=+307.791646303" Feb 20 15:06:09.808769 master-0 kubenswrapper[28120]: I0220 15:06:09.808658 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:06:10.388634 master-0 kubenswrapper[28120]: I0220 15:06:10.387141 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"] Feb 20 15:06:13.898674 master-0 kubenswrapper[28120]: I0220 15:06:13.898213 28120 patch_prober.go:28] interesting pod/console-6bcb747b79-dfz8f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Feb 20 15:06:13.898674 master-0 kubenswrapper[28120]: I0220 15:06:13.898325 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6bcb747b79-dfz8f" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Feb 20 15:06:16.917742 master-0 kubenswrapper[28120]: I0220 
15:06:16.917674 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:06:16.918294 master-0 kubenswrapper[28120]: I0220 15:06:16.917796 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:06:17.213224 master-0 kubenswrapper[28120]: I0220 15:06:17.213126 28120 patch_prober.go:28] interesting pod/console-68cd6dbb78-rhjhv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 20 15:06:17.213224 master-0 kubenswrapper[28120]: I0220 15:06:17.213174 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68cd6dbb78-rhjhv" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 20 15:06:23.447253 master-0 kubenswrapper[28120]: I0220 15:06:23.447100 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5458766959-m6tn4"] Feb 20 15:06:23.448181 master-0 kubenswrapper[28120]: I0220 15:06:23.448094 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" podUID="fc9674e2-3da5-4014-86ed-6498001508c8" containerName="controller-manager" containerID="cri-o://2a5ddcebadd5cb1873be6fffdf4c849705745e92404a961a579be12e64de8044" gracePeriod=30 Feb 20 15:06:23.522064 master-0 kubenswrapper[28120]: I0220 15:06:23.522010 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v"] Feb 20 15:06:23.522261 master-0 kubenswrapper[28120]: I0220 15:06:23.522232 28120 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" podUID="d597d2e5-b22b-49d8-bde6-21c0f2cdd47a" containerName="route-controller-manager" containerID="cri-o://e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782" gracePeriod=30 Feb 20 15:06:23.630768 master-0 kubenswrapper[28120]: I0220 15:06:23.630706 28120 generic.go:334] "Generic (PLEG): container finished" podID="fc9674e2-3da5-4014-86ed-6498001508c8" containerID="2a5ddcebadd5cb1873be6fffdf4c849705745e92404a961a579be12e64de8044" exitCode=0 Feb 20 15:06:23.630870 master-0 kubenswrapper[28120]: I0220 15:06:23.630728 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" event={"ID":"fc9674e2-3da5-4014-86ed-6498001508c8","Type":"ContainerDied","Data":"2a5ddcebadd5cb1873be6fffdf4c849705745e92404a961a579be12e64de8044"} Feb 20 15:06:23.895337 master-0 kubenswrapper[28120]: I0220 15:06:23.895258 28120 patch_prober.go:28] interesting pod/console-6bcb747b79-dfz8f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Feb 20 15:06:23.895529 master-0 kubenswrapper[28120]: I0220 15:06:23.895410 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6bcb747b79-dfz8f" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Feb 20 15:06:24.344553 master-0 kubenswrapper[28120]: I0220 15:06:24.344511 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" Feb 20 15:06:24.433758 master-0 kubenswrapper[28120]: I0220 15:06:24.433726 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" Feb 20 15:06:24.521994 master-0 kubenswrapper[28120]: I0220 15:06:24.521882 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9674e2-3da5-4014-86ed-6498001508c8-serving-cert\") pod \"fc9674e2-3da5-4014-86ed-6498001508c8\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " Feb 20 15:06:24.521994 master-0 kubenswrapper[28120]: I0220 15:06:24.521950 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xzwx\" (UniqueName: \"kubernetes.io/projected/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-kube-api-access-7xzwx\") pod \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " Feb 20 15:06:24.521994 master-0 kubenswrapper[28120]: I0220 15:06:24.521983 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-client-ca\") pod \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") " Feb 20 15:06:24.522792 master-0 kubenswrapper[28120]: I0220 15:06:24.522014 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-config\") pod \"fc9674e2-3da5-4014-86ed-6498001508c8\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") " Feb 20 15:06:24.522792 master-0 kubenswrapper[28120]: I0220 15:06:24.522035 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-serving-cert\") pod \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") "
Feb 20 15:06:24.522792 master-0 kubenswrapper[28120]: I0220 15:06:24.522122 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-config\") pod \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\" (UID: \"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a\") "
Feb 20 15:06:24.522792 master-0 kubenswrapper[28120]: I0220 15:06:24.522145 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9kjj\" (UniqueName: \"kubernetes.io/projected/fc9674e2-3da5-4014-86ed-6498001508c8-kube-api-access-n9kjj\") pod \"fc9674e2-3da5-4014-86ed-6498001508c8\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") "
Feb 20 15:06:24.522792 master-0 kubenswrapper[28120]: I0220 15:06:24.522163 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-proxy-ca-bundles\") pod \"fc9674e2-3da5-4014-86ed-6498001508c8\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") "
Feb 20 15:06:24.522792 master-0 kubenswrapper[28120]: I0220 15:06:24.522187 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-client-ca\") pod \"fc9674e2-3da5-4014-86ed-6498001508c8\" (UID: \"fc9674e2-3da5-4014-86ed-6498001508c8\") "
Feb 20 15:06:24.523205 master-0 kubenswrapper[28120]: I0220 15:06:24.522814 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fc9674e2-3da5-4014-86ed-6498001508c8" (UID: "fc9674e2-3da5-4014-86ed-6498001508c8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:06:24.523205 master-0 kubenswrapper[28120]: I0220 15:06:24.522902 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-client-ca" (OuterVolumeSpecName: "client-ca") pod "d597d2e5-b22b-49d8-bde6-21c0f2cdd47a" (UID: "d597d2e5-b22b-49d8-bde6-21c0f2cdd47a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:06:24.523205 master-0 kubenswrapper[28120]: I0220 15:06:24.522916 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "fc9674e2-3da5-4014-86ed-6498001508c8" (UID: "fc9674e2-3da5-4014-86ed-6498001508c8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:06:24.523524 master-0 kubenswrapper[28120]: I0220 15:06:24.523428 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-config" (OuterVolumeSpecName: "config") pod "fc9674e2-3da5-4014-86ed-6498001508c8" (UID: "fc9674e2-3da5-4014-86ed-6498001508c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:06:24.523622 master-0 kubenswrapper[28120]: I0220 15:06:24.523554 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-config" (OuterVolumeSpecName: "config") pod "d597d2e5-b22b-49d8-bde6-21c0f2cdd47a" (UID: "d597d2e5-b22b-49d8-bde6-21c0f2cdd47a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:06:24.524689 master-0 kubenswrapper[28120]: I0220 15:06:24.524625 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d597d2e5-b22b-49d8-bde6-21c0f2cdd47a" (UID: "d597d2e5-b22b-49d8-bde6-21c0f2cdd47a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:06:24.525111 master-0 kubenswrapper[28120]: I0220 15:06:24.525069 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9674e2-3da5-4014-86ed-6498001508c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fc9674e2-3da5-4014-86ed-6498001508c8" (UID: "fc9674e2-3da5-4014-86ed-6498001508c8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:06:24.525361 master-0 kubenswrapper[28120]: I0220 15:06:24.525303 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9674e2-3da5-4014-86ed-6498001508c8-kube-api-access-n9kjj" (OuterVolumeSpecName: "kube-api-access-n9kjj") pod "fc9674e2-3da5-4014-86ed-6498001508c8" (UID: "fc9674e2-3da5-4014-86ed-6498001508c8"). InnerVolumeSpecName "kube-api-access-n9kjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:06:24.525561 master-0 kubenswrapper[28120]: I0220 15:06:24.525521 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-kube-api-access-7xzwx" (OuterVolumeSpecName: "kube-api-access-7xzwx") pod "d597d2e5-b22b-49d8-bde6-21c0f2cdd47a" (UID: "d597d2e5-b22b-49d8-bde6-21c0f2cdd47a"). InnerVolumeSpecName "kube-api-access-7xzwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:06:24.629042 master-0 kubenswrapper[28120]: I0220 15:06:24.624355 28120 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9674e2-3da5-4014-86ed-6498001508c8-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:24.629042 master-0 kubenswrapper[28120]: I0220 15:06:24.624425 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xzwx\" (UniqueName: \"kubernetes.io/projected/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-kube-api-access-7xzwx\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:24.629042 master-0 kubenswrapper[28120]: I0220 15:06:24.624449 28120 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:24.629042 master-0 kubenswrapper[28120]: I0220 15:06:24.624470 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-config\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:24.629042 master-0 kubenswrapper[28120]: I0220 15:06:24.624488 28120 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:24.629042 master-0 kubenswrapper[28120]: I0220 15:06:24.624567 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a-config\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:24.629042 master-0 kubenswrapper[28120]: I0220 15:06:24.624586 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9kjj\" (UniqueName: \"kubernetes.io/projected/fc9674e2-3da5-4014-86ed-6498001508c8-kube-api-access-n9kjj\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:24.629042 master-0 kubenswrapper[28120]: I0220 15:06:24.624604 28120 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:24.629042 master-0 kubenswrapper[28120]: I0220 15:06:24.624621 28120 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9674e2-3da5-4014-86ed-6498001508c8-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:24.642957 master-0 kubenswrapper[28120]: I0220 15:06:24.642895 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5458766959-m6tn4"
Feb 20 15:06:24.643159 master-0 kubenswrapper[28120]: I0220 15:06:24.642890 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5458766959-m6tn4" event={"ID":"fc9674e2-3da5-4014-86ed-6498001508c8","Type":"ContainerDied","Data":"4d96da117087fa997128e35ced257cd8a3ecdf7090723175519e9584a990136f"}
Feb 20 15:06:24.643159 master-0 kubenswrapper[28120]: I0220 15:06:24.643086 28120 scope.go:117] "RemoveContainer" containerID="2a5ddcebadd5cb1873be6fffdf4c849705745e92404a961a579be12e64de8044"
Feb 20 15:06:24.646615 master-0 kubenswrapper[28120]: I0220 15:06:24.645775 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-f6nq7" event={"ID":"15b884b3-2614-453a-88c5-f47fcc14aad1","Type":"ContainerStarted","Data":"5f39bd0efd69d85533b509f111d6eac178fe68a59a0241f9aa2a2920bc5b14ae"}
Feb 20 15:06:24.646615 master-0 kubenswrapper[28120]: I0220 15:06:24.646439 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-955b69498-f6nq7"
Feb 20 15:06:24.650071 master-0 kubenswrapper[28120]: I0220 15:06:24.650026 28120 generic.go:334] "Generic (PLEG): container finished" podID="d597d2e5-b22b-49d8-bde6-21c0f2cdd47a" containerID="e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782" exitCode=0
Feb 20 15:06:24.650071 master-0 kubenswrapper[28120]: I0220 15:06:24.650065 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" event={"ID":"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a","Type":"ContainerDied","Data":"e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782"}
Feb 20 15:06:24.650264 master-0 kubenswrapper[28120]: I0220 15:06:24.650086 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v" event={"ID":"d597d2e5-b22b-49d8-bde6-21c0f2cdd47a","Type":"ContainerDied","Data":"1e87f29e65b2beccf98b92c82824b97df4b2eac69862d655aebbfd9197f6622d"}
Feb 20 15:06:24.650264 master-0 kubenswrapper[28120]: I0220 15:06:24.650124 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v"
Feb 20 15:06:24.651127 master-0 kubenswrapper[28120]: I0220 15:06:24.651058 28120 patch_prober.go:28] interesting pod/downloads-955b69498-f6nq7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.99:8080/\": dial tcp 10.128.0.99:8080: connect: connection refused" start-of-body=
Feb 20 15:06:24.651307 master-0 kubenswrapper[28120]: I0220 15:06:24.651157 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-f6nq7" podUID="15b884b3-2614-453a-88c5-f47fcc14aad1" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.99:8080/\": dial tcp 10.128.0.99:8080: connect: connection refused"
Feb 20 15:06:24.686682 master-0 kubenswrapper[28120]: I0220 15:06:24.686638 28120 scope.go:117] "RemoveContainer" containerID="e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782"
Feb 20 15:06:24.691594 master-0 kubenswrapper[28120]: I0220 15:06:24.691479 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-955b69498-f6nq7" podStartSLOduration=2.63995274 podStartE2EDuration="38.691454743s" podCreationTimestamp="2026-02-20 15:05:46 +0000 UTC" firstStartedPulling="2026-02-20 15:05:47.876806981 +0000 UTC m=+286.137600544" lastFinishedPulling="2026-02-20 15:06:23.928308994 +0000 UTC m=+322.189102547" observedRunningTime="2026-02-20 15:06:24.6829722 +0000 UTC m=+322.943765773" watchObservedRunningTime="2026-02-20 15:06:24.691454743 +0000 UTC m=+322.952248316"
Feb 20 15:06:24.706639 master-0 kubenswrapper[28120]: I0220 15:06:24.706569 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v"]
Feb 20 15:06:24.711576 master-0 kubenswrapper[28120]: I0220 15:06:24.711521 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dd587fc6d-zkg8v"]
Feb 20 15:06:24.726744 master-0 kubenswrapper[28120]: I0220 15:06:24.726662 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5458766959-m6tn4"]
Feb 20 15:06:24.727664 master-0 kubenswrapper[28120]: I0220 15:06:24.727643 28120 scope.go:117] "RemoveContainer" containerID="e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782"
Feb 20 15:06:24.728400 master-0 kubenswrapper[28120]: E0220 15:06:24.728346 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782\": container with ID starting with e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782 not found: ID does not exist" containerID="e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782"
Feb 20 15:06:24.728484 master-0 kubenswrapper[28120]: I0220 15:06:24.728412 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782"} err="failed to get container status \"e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782\": rpc error: code = NotFound desc = could not find container \"e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782\": container with ID starting with e65df5afa22c23ca0de307de75ab77426c289897e4168e101062dcfd0c757782 not found: ID does not exist"
Feb 20 15:06:24.733026 master-0 kubenswrapper[28120]: I0220 15:06:24.732966 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5458766959-m6tn4"]
Feb 20 15:06:25.123623 master-0 kubenswrapper[28120]: I0220 15:06:25.123531 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"]
Feb 20 15:06:25.124195 master-0 kubenswrapper[28120]: E0220 15:06:25.124146 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9674e2-3da5-4014-86ed-6498001508c8" containerName="controller-manager"
Feb 20 15:06:25.124293 master-0 kubenswrapper[28120]: I0220 15:06:25.124185 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9674e2-3da5-4014-86ed-6498001508c8" containerName="controller-manager"
Feb 20 15:06:25.124357 master-0 kubenswrapper[28120]: E0220 15:06:25.124282 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d597d2e5-b22b-49d8-bde6-21c0f2cdd47a" containerName="route-controller-manager"
Feb 20 15:06:25.124357 master-0 kubenswrapper[28120]: I0220 15:06:25.124345 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d597d2e5-b22b-49d8-bde6-21c0f2cdd47a" containerName="route-controller-manager"
Feb 20 15:06:25.126230 master-0 kubenswrapper[28120]: I0220 15:06:25.126193 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9674e2-3da5-4014-86ed-6498001508c8" containerName="controller-manager"
Feb 20 15:06:25.126415 master-0 kubenswrapper[28120]: I0220 15:06:25.126278 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="d597d2e5-b22b-49d8-bde6-21c0f2cdd47a" containerName="route-controller-manager"
Feb 20 15:06:25.128033 master-0 kubenswrapper[28120]: I0220 15:06:25.127981 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.132331 master-0 kubenswrapper[28120]: I0220 15:06:25.132266 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 20 15:06:25.132647 master-0 kubenswrapper[28120]: I0220 15:06:25.132465 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 20 15:06:25.133812 master-0 kubenswrapper[28120]: I0220 15:06:25.133641 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 20 15:06:25.142916 master-0 kubenswrapper[28120]: I0220 15:06:25.142841 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-9zvh6"
Feb 20 15:06:25.143170 master-0 kubenswrapper[28120]: I0220 15:06:25.142963 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 20 15:06:25.143261 master-0 kubenswrapper[28120]: I0220 15:06:25.143186 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 20 15:06:25.143419 master-0 kubenswrapper[28120]: I0220 15:06:25.143352 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"]
Feb 20 15:06:25.147549 master-0 kubenswrapper[28120]: I0220 15:06:25.147497 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 20 15:06:25.149325 master-0 kubenswrapper[28120]: I0220 15:06:25.149223 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.153111 master-0 kubenswrapper[28120]: I0220 15:06:25.153059 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 20 15:06:25.153263 master-0 kubenswrapper[28120]: I0220 15:06:25.153118 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 20 15:06:25.153263 master-0 kubenswrapper[28120]: I0220 15:06:25.153139 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 20 15:06:25.153528 master-0 kubenswrapper[28120]: I0220 15:06:25.153402 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-2w8rc"
Feb 20 15:06:25.153646 master-0 kubenswrapper[28120]: I0220 15:06:25.153563 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 20 15:06:25.154799 master-0 kubenswrapper[28120]: I0220 15:06:25.154745 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 20 15:06:25.165774 master-0 kubenswrapper[28120]: I0220 15:06:25.159808 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"]
Feb 20 15:06:25.169600 master-0 kubenswrapper[28120]: I0220 15:06:25.169493 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"]
Feb 20 15:06:25.242951 master-0 kubenswrapper[28120]: I0220 15:06:25.242875 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c04153-c57f-4e1e-b392-ab727c37fa43-client-ca\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.243537 master-0 kubenswrapper[28120]: I0220 15:06:25.243150 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8xps\" (UniqueName: \"kubernetes.io/projected/97c04153-c57f-4e1e-b392-ab727c37fa43-kube-api-access-k8xps\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.243537 master-0 kubenswrapper[28120]: I0220 15:06:25.243220 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97c04153-c57f-4e1e-b392-ab727c37fa43-proxy-ca-bundles\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.243537 master-0 kubenswrapper[28120]: I0220 15:06:25.243291 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c04153-c57f-4e1e-b392-ab727c37fa43-config\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.243537 master-0 kubenswrapper[28120]: I0220 15:06:25.243339 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97c04153-c57f-4e1e-b392-ab727c37fa43-serving-cert\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.345538 master-0 kubenswrapper[28120]: I0220 15:06:25.345393 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e852502-9703-4703-b5ca-b1157e768277-client-ca\") pod \"route-controller-manager-6d6647c7c6-w7zgf\" (UID: \"1e852502-9703-4703-b5ca-b1157e768277\") " pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.345538 master-0 kubenswrapper[28120]: I0220 15:06:25.345514 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c04153-c57f-4e1e-b392-ab727c37fa43-client-ca\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.345855 master-0 kubenswrapper[28120]: I0220 15:06:25.345716 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b9ks\" (UniqueName: \"kubernetes.io/projected/1e852502-9703-4703-b5ca-b1157e768277-kube-api-access-9b9ks\") pod \"route-controller-manager-6d6647c7c6-w7zgf\" (UID: \"1e852502-9703-4703-b5ca-b1157e768277\") " pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.345965 master-0 kubenswrapper[28120]: I0220 15:06:25.345905 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8xps\" (UniqueName: \"kubernetes.io/projected/97c04153-c57f-4e1e-b392-ab727c37fa43-kube-api-access-k8xps\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.346045 master-0 kubenswrapper[28120]: I0220 15:06:25.346002 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97c04153-c57f-4e1e-b392-ab727c37fa43-proxy-ca-bundles\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.346103 master-0 kubenswrapper[28120]: I0220 15:06:25.346041 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e852502-9703-4703-b5ca-b1157e768277-config\") pod \"route-controller-manager-6d6647c7c6-w7zgf\" (UID: \"1e852502-9703-4703-b5ca-b1157e768277\") " pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.346103 master-0 kubenswrapper[28120]: I0220 15:06:25.346077 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e852502-9703-4703-b5ca-b1157e768277-serving-cert\") pod \"route-controller-manager-6d6647c7c6-w7zgf\" (UID: \"1e852502-9703-4703-b5ca-b1157e768277\") " pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.346342 master-0 kubenswrapper[28120]: I0220 15:06:25.346130 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c04153-c57f-4e1e-b392-ab727c37fa43-config\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.346342 master-0 kubenswrapper[28120]: I0220 15:06:25.346200 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97c04153-c57f-4e1e-b392-ab727c37fa43-serving-cert\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.347242 master-0 kubenswrapper[28120]: I0220 15:06:25.347184 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c04153-c57f-4e1e-b392-ab727c37fa43-client-ca\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.347562 master-0 kubenswrapper[28120]: I0220 15:06:25.347527 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c04153-c57f-4e1e-b392-ab727c37fa43-config\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.348583 master-0 kubenswrapper[28120]: I0220 15:06:25.348541 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97c04153-c57f-4e1e-b392-ab727c37fa43-proxy-ca-bundles\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.350593 master-0 kubenswrapper[28120]: I0220 15:06:25.350536 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97c04153-c57f-4e1e-b392-ab727c37fa43-serving-cert\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.372781 master-0 kubenswrapper[28120]: I0220 15:06:25.372716 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8xps\" (UniqueName: \"kubernetes.io/projected/97c04153-c57f-4e1e-b392-ab727c37fa43-kube-api-access-k8xps\") pod \"controller-manager-5b6db4d85b-5gzn5\" (UID: \"97c04153-c57f-4e1e-b392-ab727c37fa43\") " pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.447650 master-0 kubenswrapper[28120]: I0220 15:06:25.447491 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e852502-9703-4703-b5ca-b1157e768277-client-ca\") pod \"route-controller-manager-6d6647c7c6-w7zgf\" (UID: \"1e852502-9703-4703-b5ca-b1157e768277\") " pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.447650 master-0 kubenswrapper[28120]: I0220 15:06:25.447622 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b9ks\" (UniqueName: \"kubernetes.io/projected/1e852502-9703-4703-b5ca-b1157e768277-kube-api-access-9b9ks\") pod \"route-controller-manager-6d6647c7c6-w7zgf\" (UID: \"1e852502-9703-4703-b5ca-b1157e768277\") " pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.447910 master-0 kubenswrapper[28120]: I0220 15:06:25.447727 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e852502-9703-4703-b5ca-b1157e768277-config\") pod \"route-controller-manager-6d6647c7c6-w7zgf\" (UID: \"1e852502-9703-4703-b5ca-b1157e768277\") " pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.447910 master-0 kubenswrapper[28120]: I0220 15:06:25.447765 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e852502-9703-4703-b5ca-b1157e768277-serving-cert\") pod \"route-controller-manager-6d6647c7c6-w7zgf\" (UID: \"1e852502-9703-4703-b5ca-b1157e768277\") " pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.449285 master-0 kubenswrapper[28120]: I0220 15:06:25.449226 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e852502-9703-4703-b5ca-b1157e768277-client-ca\") pod \"route-controller-manager-6d6647c7c6-w7zgf\" (UID: \"1e852502-9703-4703-b5ca-b1157e768277\") " pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.450671 master-0 kubenswrapper[28120]: I0220 15:06:25.450618 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e852502-9703-4703-b5ca-b1157e768277-config\") pod \"route-controller-manager-6d6647c7c6-w7zgf\" (UID: \"1e852502-9703-4703-b5ca-b1157e768277\") " pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.452179 master-0 kubenswrapper[28120]: I0220 15:06:25.452127 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e852502-9703-4703-b5ca-b1157e768277-serving-cert\") pod \"route-controller-manager-6d6647c7c6-w7zgf\" (UID: \"1e852502-9703-4703-b5ca-b1157e768277\") " pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.479561 master-0 kubenswrapper[28120]: I0220 15:06:25.479499 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b9ks\" (UniqueName: \"kubernetes.io/projected/1e852502-9703-4703-b5ca-b1157e768277-kube-api-access-9b9ks\") pod \"route-controller-manager-6d6647c7c6-w7zgf\" (UID: \"1e852502-9703-4703-b5ca-b1157e768277\") " pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.490939 master-0 kubenswrapper[28120]: I0220 15:06:25.490865 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:25.525675 master-0 kubenswrapper[28120]: I0220 15:06:25.525584 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:25.668813 master-0 kubenswrapper[28120]: I0220 15:06:25.668753 28120 patch_prober.go:28] interesting pod/downloads-955b69498-f6nq7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.99:8080/\": dial tcp 10.128.0.99:8080: connect: connection refused" start-of-body=
Feb 20 15:06:25.669032 master-0 kubenswrapper[28120]: I0220 15:06:25.668835 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-f6nq7" podUID="15b884b3-2614-453a-88c5-f47fcc14aad1" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.99:8080/\": dial tcp 10.128.0.99:8080: connect: connection refused"
Feb 20 15:06:25.996729 master-0 kubenswrapper[28120]: I0220 15:06:25.996655 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"]
Feb 20 15:06:25.997544 master-0 kubenswrapper[28120]: W0220 15:06:25.997498 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97c04153_c57f_4e1e_b392_ab727c37fa43.slice/crio-c764a9a3914adbbc34eb5578ab51444ff24fa89a43cc3cf2220c664b137438d9 WatchSource:0}: Error finding container c764a9a3914adbbc34eb5578ab51444ff24fa89a43cc3cf2220c664b137438d9: Status 404 returned error can't find the container with id c764a9a3914adbbc34eb5578ab51444ff24fa89a43cc3cf2220c664b137438d9
Feb 20 15:06:26.081731 master-0 kubenswrapper[28120]: I0220 15:06:26.081673 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d597d2e5-b22b-49d8-bde6-21c0f2cdd47a" path="/var/lib/kubelet/pods/d597d2e5-b22b-49d8-bde6-21c0f2cdd47a/volumes"
Feb 20 15:06:26.082809 master-0 kubenswrapper[28120]: I0220 15:06:26.082769 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9674e2-3da5-4014-86ed-6498001508c8" path="/var/lib/kubelet/pods/fc9674e2-3da5-4014-86ed-6498001508c8/volumes"
Feb 20 15:06:26.083739 master-0 kubenswrapper[28120]: W0220 15:06:26.083690 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e852502_9703_4703_b5ca_b1157e768277.slice/crio-4674cebd48fbe1e73ca4ec08c55d777115bf2e7d85a7194b30a95cbe757c2969 WatchSource:0}: Error finding container 4674cebd48fbe1e73ca4ec08c55d777115bf2e7d85a7194b30a95cbe757c2969: Status 404 returned error can't find the container with id 4674cebd48fbe1e73ca4ec08c55d777115bf2e7d85a7194b30a95cbe757c2969
Feb 20 15:06:26.095368 master-0 kubenswrapper[28120]: I0220 15:06:26.095301 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"]
Feb 20 15:06:26.681045 master-0 kubenswrapper[28120]: I0220 15:06:26.680857 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5" event={"ID":"97c04153-c57f-4e1e-b392-ab727c37fa43","Type":"ContainerStarted","Data":"754a86c1f23491c66fe601f3189bb9ab92e2a590adfb7a26123f23953fb75170"}
Feb 20 15:06:26.681045 master-0 kubenswrapper[28120]: I0220 15:06:26.680955 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5" event={"ID":"97c04153-c57f-4e1e-b392-ab727c37fa43","Type":"ContainerStarted","Data":"c764a9a3914adbbc34eb5578ab51444ff24fa89a43cc3cf2220c664b137438d9"}
Feb 20 15:06:26.684385 master-0 kubenswrapper[28120]: I0220 15:06:26.684306 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf" event={"ID":"1e852502-9703-4703-b5ca-b1157e768277","Type":"ContainerStarted","Data":"a5cd916b5607b79c0cc3477526846c20a49ac3b33bf50b1c4609037a406dca7f"}
Feb 20 15:06:26.684385 master-0 kubenswrapper[28120]: I0220 15:06:26.684385 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf" event={"ID":"1e852502-9703-4703-b5ca-b1157e768277","Type":"ContainerStarted","Data":"4674cebd48fbe1e73ca4ec08c55d777115bf2e7d85a7194b30a95cbe757c2969"}
Feb 20 15:06:26.684765 master-0 kubenswrapper[28120]: I0220 15:06:26.684715 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:26.825609 master-0 kubenswrapper[28120]: I0220 15:06:26.825468 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5" podStartSLOduration=3.825435191 podStartE2EDuration="3.825435191s" podCreationTimestamp="2026-02-20 15:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:06:26.823498442 +0000 UTC m=+325.084292015" watchObservedRunningTime="2026-02-20 15:06:26.825435191 +0000 UTC m=+325.086228794"
Feb 20 15:06:27.071481 master-0 kubenswrapper[28120]: I0220 15:06:27.071283 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf" podStartSLOduration=4.071256978 podStartE2EDuration="4.071256978s" podCreationTimestamp="2026-02-20 15:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:06:27.066096419 +0000 UTC m=+325.326889992" watchObservedRunningTime="2026-02-20 15:06:27.071256978 +0000 UTC m=+325.332050541"
Feb 20 15:06:27.075304 master-0 kubenswrapper[28120]: I0220 15:06:27.075252 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d6647c7c6-w7zgf"
Feb 20 15:06:27.213933 master-0 kubenswrapper[28120]: I0220 15:06:27.213865 28120 patch_prober.go:28] interesting pod/console-68cd6dbb78-rhjhv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Feb 20 15:06:27.214177 master-0 kubenswrapper[28120]: I0220 15:06:27.213936 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68cd6dbb78-rhjhv" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Feb 20 15:06:27.345217 master-0 kubenswrapper[28120]: I0220 15:06:27.345055 28120 patch_prober.go:28] interesting pod/downloads-955b69498-f6nq7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.99:8080/\": dial tcp 10.128.0.99:8080: connect: connection refused" start-of-body=
Feb 20 15:06:27.345217 master-0 kubenswrapper[28120]: I0220 15:06:27.345140 28120 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-955b69498-f6nq7" podUID="15b884b3-2614-453a-88c5-f47fcc14aad1" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.99:8080/\": dial tcp 10.128.0.99:8080: connect: connection refused"
Feb 20 15:06:27.345217 master-0 kubenswrapper[28120]: I0220 15:06:27.345170 28120 patch_prober.go:28] interesting pod/downloads-955b69498-f6nq7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.99:8080/\": dial tcp 10.128.0.99:8080: connect: connection refused" start-of-body=
Feb 20 15:06:27.345591 master-0 kubenswrapper[28120]: I0220 15:06:27.345260 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-f6nq7" podUID="15b884b3-2614-453a-88c5-f47fcc14aad1" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.99:8080/\": dial tcp 10.128.0.99:8080: connect: connection refused"
Feb 20 15:06:27.692542 master-0 kubenswrapper[28120]: I0220 15:06:27.692410 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:27.698441 master-0 kubenswrapper[28120]: I0220 15:06:27.698383 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b6db4d85b-5gzn5"
Feb 20 15:06:33.895169 master-0 kubenswrapper[28120]: I0220 15:06:33.895067 28120 patch_prober.go:28] interesting pod/console-6bcb747b79-dfz8f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body=
Feb 20 15:06:33.896072 master-0 kubenswrapper[28120]: I0220 15:06:33.895166 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6bcb747b79-dfz8f" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused"
Feb 20 15:06:35.437634 master-0 kubenswrapper[28120]: I0220 15:06:35.437521 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" podUID="bc47b01c-8c45-4e60-9558-7d34770441d4" containerName="oauth-openshift"
containerID="cri-o://fad731f5662d750a81e6190957a60cb50151b6c92bf5f4edd1fbaf227d85ba11" gracePeriod=15 Feb 20 15:06:36.808709 master-0 kubenswrapper[28120]: I0220 15:06:36.808639 28120 generic.go:334] "Generic (PLEG): container finished" podID="bc47b01c-8c45-4e60-9558-7d34770441d4" containerID="fad731f5662d750a81e6190957a60cb50151b6c92bf5f4edd1fbaf227d85ba11" exitCode=0 Feb 20 15:06:36.809514 master-0 kubenswrapper[28120]: I0220 15:06:36.808717 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" event={"ID":"bc47b01c-8c45-4e60-9558-7d34770441d4","Type":"ContainerDied","Data":"fad731f5662d750a81e6190957a60cb50151b6c92bf5f4edd1fbaf227d85ba11"} Feb 20 15:06:36.927517 master-0 kubenswrapper[28120]: I0220 15:06:36.927412 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:06:36.933241 master-0 kubenswrapper[28120]: I0220 15:06:36.933204 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-59db89dc96-xkdwh" Feb 20 15:06:37.120605 master-0 kubenswrapper[28120]: I0220 15:06:37.120506 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" Feb 20 15:06:37.175960 master-0 kubenswrapper[28120]: I0220 15:06:37.175599 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-serving-cert\") pod \"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.175960 master-0 kubenswrapper[28120]: I0220 15:06:37.175685 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-login\") pod \"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.175960 master-0 kubenswrapper[28120]: I0220 15:06:37.175734 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xllbv\" (UniqueName: \"kubernetes.io/projected/bc47b01c-8c45-4e60-9558-7d34770441d4-kube-api-access-xllbv\") pod \"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.176383 master-0 kubenswrapper[28120]: I0220 15:06:37.176150 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-trusted-ca-bundle\") pod \"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.176486 master-0 kubenswrapper[28120]: I0220 15:06:37.176433 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-cliconfig\") pod 
\"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.176608 master-0 kubenswrapper[28120]: I0220 15:06:37.176565 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-ocp-branding-template\") pod \"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.176941 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc47b01c-8c45-4e60-9558-7d34770441d4-audit-dir\") pod \"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.177022 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.177034 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc47b01c-8c45-4e60-9558-7d34770441d4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.177072 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.177061 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-error\") pod \"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.177220 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-audit-policies\") pod \"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.177298 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-provider-selection\") pod \"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.177433 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-service-ca\") pod \"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.177552 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-session\") pod \"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.177608 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-router-certs\") pod \"bc47b01c-8c45-4e60-9558-7d34770441d4\" (UID: \"bc47b01c-8c45-4e60-9558-7d34770441d4\") " Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.177826 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.178807 28120 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.178851 28120 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.178878 28120 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc47b01c-8c45-4e60-9558-7d34770441d4-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.178902 28120 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.179503 master-0 kubenswrapper[28120]: I0220 15:06:37.178849 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:06:37.181091 master-0 kubenswrapper[28120]: I0220 15:06:37.181021 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:06:37.181264 master-0 kubenswrapper[28120]: I0220 15:06:37.181181 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:06:37.181428 master-0 kubenswrapper[28120]: I0220 15:06:37.181259 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:06:37.182367 master-0 kubenswrapper[28120]: I0220 15:06:37.182278 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:06:37.182770 master-0 kubenswrapper[28120]: I0220 15:06:37.182671 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc47b01c-8c45-4e60-9558-7d34770441d4-kube-api-access-xllbv" (OuterVolumeSpecName: "kube-api-access-xllbv") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). InnerVolumeSpecName "kube-api-access-xllbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:06:37.182770 master-0 kubenswrapper[28120]: I0220 15:06:37.182683 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:06:37.184021 master-0 kubenswrapper[28120]: I0220 15:06:37.183964 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:06:37.184982 master-0 kubenswrapper[28120]: I0220 15:06:37.184899 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "bc47b01c-8c45-4e60-9558-7d34770441d4" (UID: "bc47b01c-8c45-4e60-9558-7d34770441d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:06:37.214041 master-0 kubenswrapper[28120]: I0220 15:06:37.213970 28120 patch_prober.go:28] interesting pod/console-68cd6dbb78-rhjhv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 20 15:06:37.214351 master-0 kubenswrapper[28120]: I0220 15:06:37.214042 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68cd6dbb78-rhjhv" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 20 15:06:37.280870 master-0 kubenswrapper[28120]: I0220 15:06:37.280764 28120 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.280870 master-0 kubenswrapper[28120]: I0220 15:06:37.280816 28120 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.280870 master-0 kubenswrapper[28120]: I0220 
15:06:37.280830 28120 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.280870 master-0 kubenswrapper[28120]: I0220 15:06:37.280842 28120 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.280870 master-0 kubenswrapper[28120]: I0220 15:06:37.280854 28120 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.280870 master-0 kubenswrapper[28120]: I0220 15:06:37.280866 28120 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.280870 master-0 kubenswrapper[28120]: I0220 15:06:37.280880 28120 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.280870 master-0 kubenswrapper[28120]: I0220 15:06:37.280889 28120 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bc47b01c-8c45-4e60-9558-7d34770441d4-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.280870 master-0 kubenswrapper[28120]: I0220 15:06:37.280901 28120 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-xllbv\" (UniqueName: \"kubernetes.io/projected/bc47b01c-8c45-4e60-9558-7d34770441d4-kube-api-access-xllbv\") on node \"master-0\" DevicePath \"\"" Feb 20 15:06:37.358254 master-0 kubenswrapper[28120]: I0220 15:06:37.358048 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-955b69498-f6nq7" Feb 20 15:06:37.818703 master-0 kubenswrapper[28120]: I0220 15:06:37.818649 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" Feb 20 15:06:37.818703 master-0 kubenswrapper[28120]: I0220 15:06:37.818640 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs" event={"ID":"bc47b01c-8c45-4e60-9558-7d34770441d4","Type":"ContainerDied","Data":"89341d3c5d275b2547226300b721d50db25c215cd2a2005a65109649867d88aa"} Feb 20 15:06:37.819327 master-0 kubenswrapper[28120]: I0220 15:06:37.818732 28120 scope.go:117] "RemoveContainer" containerID="fad731f5662d750a81e6190957a60cb50151b6c92bf5f4edd1fbaf227d85ba11" Feb 20 15:06:38.430626 master-0 kubenswrapper[28120]: I0220 15:06:38.424474 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-58f7754bd5-24tqr"] Feb 20 15:06:38.430626 master-0 kubenswrapper[28120]: E0220 15:06:38.424853 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc47b01c-8c45-4e60-9558-7d34770441d4" containerName="oauth-openshift" Feb 20 15:06:38.430626 master-0 kubenswrapper[28120]: I0220 15:06:38.424869 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc47b01c-8c45-4e60-9558-7d34770441d4" containerName="oauth-openshift" Feb 20 15:06:38.430626 master-0 kubenswrapper[28120]: I0220 15:06:38.425058 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc47b01c-8c45-4e60-9558-7d34770441d4" containerName="oauth-openshift" Feb 20 15:06:38.430626 master-0 
kubenswrapper[28120]: I0220 15:06:38.425757 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.430626 master-0 kubenswrapper[28120]: I0220 15:06:38.430513 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-2wv5x" Feb 20 15:06:38.437701 master-0 kubenswrapper[28120]: I0220 15:06:38.437514 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 20 15:06:38.437701 master-0 kubenswrapper[28120]: I0220 15:06:38.437697 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 20 15:06:38.438224 master-0 kubenswrapper[28120]: I0220 15:06:38.438137 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 20 15:06:38.438332 master-0 kubenswrapper[28120]: I0220 15:06:38.438241 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 20 15:06:38.438564 master-0 kubenswrapper[28120]: I0220 15:06:38.438537 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 20 15:06:38.438642 master-0 kubenswrapper[28120]: I0220 15:06:38.438599 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 20 15:06:38.443823 master-0 kubenswrapper[28120]: I0220 15:06:38.443782 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 20 15:06:38.444069 master-0 kubenswrapper[28120]: I0220 15:06:38.444033 28120 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Feb 20 15:06:38.444795 master-0 kubenswrapper[28120]: I0220 15:06:38.444760 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 20 15:06:38.444986 master-0 kubenswrapper[28120]: I0220 15:06:38.444957 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 20 15:06:38.445125 master-0 kubenswrapper[28120]: I0220 15:06:38.445096 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 20 15:06:38.452840 master-0 kubenswrapper[28120]: I0220 15:06:38.450860 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 20 15:06:38.456847 master-0 kubenswrapper[28120]: I0220 15:06:38.455632 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"] Feb 20 15:06:38.459672 master-0 kubenswrapper[28120]: I0220 15:06:38.459619 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 20 15:06:38.488327 master-0 kubenswrapper[28120]: I0220 15:06:38.488251 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58f7754bd5-24tqr"] Feb 20 15:06:38.500402 master-0 kubenswrapper[28120]: I0220 15:06:38.500291 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-7b96f5c8d4-6xtzs"] Feb 20 15:06:38.533658 master-0 kubenswrapper[28120]: I0220 15:06:38.533585 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-user-template-login\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.533823 master-0 kubenswrapper[28120]: I0220 15:06:38.533739 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-service-ca\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.534021 master-0 kubenswrapper[28120]: I0220 15:06:38.533913 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-router-certs\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.534175 master-0 kubenswrapper[28120]: I0220 15:06:38.534132 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66e941e3-32f6-413b-a363-b611f5e2ae71-audit-dir\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.534230 master-0 kubenswrapper[28120]: I0220 15:06:38.534208 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.534337 master-0 kubenswrapper[28120]: I0220 15:06:38.534294 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.534452 master-0 kubenswrapper[28120]: I0220 15:06:38.534417 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66e941e3-32f6-413b-a363-b611f5e2ae71-audit-policies\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.534629 master-0 kubenswrapper[28120]: I0220 15:06:38.534544 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-user-template-error\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.534683 master-0 kubenswrapper[28120]: I0220 15:06:38.534663 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " 
pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.534817 master-0 kubenswrapper[28120]: I0220 15:06:38.534779 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbg5w\" (UniqueName: \"kubernetes.io/projected/66e941e3-32f6-413b-a363-b611f5e2ae71-kube-api-access-gbg5w\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.534899 master-0 kubenswrapper[28120]: I0220 15:06:38.534871 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.535022 master-0 kubenswrapper[28120]: I0220 15:06:38.534991 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.535143 master-0 kubenswrapper[28120]: I0220 15:06:38.535113 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-session\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 
15:06:38.637057 master-0 kubenswrapper[28120]: I0220 15:06:38.636970 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.637310 master-0 kubenswrapper[28120]: I0220 15:06:38.637094 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-session\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.637310 master-0 kubenswrapper[28120]: I0220 15:06:38.637162 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-user-template-login\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.637446 master-0 kubenswrapper[28120]: I0220 15:06:38.637393 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-service-ca\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.637526 master-0 kubenswrapper[28120]: I0220 15:06:38.637471 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-router-certs\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.637595 master-0 kubenswrapper[28120]: I0220 15:06:38.637544 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66e941e3-32f6-413b-a363-b611f5e2ae71-audit-dir\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.637665 master-0 kubenswrapper[28120]: I0220 15:06:38.637598 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.637665 master-0 kubenswrapper[28120]: I0220 15:06:38.637640 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.638074 master-0 kubenswrapper[28120]: I0220 15:06:38.637989 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/66e941e3-32f6-413b-a363-b611f5e2ae71-audit-dir\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " 
pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.638074 master-0 kubenswrapper[28120]: I0220 15:06:38.638023 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66e941e3-32f6-413b-a363-b611f5e2ae71-audit-policies\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.638245 master-0 kubenswrapper[28120]: I0220 15:06:38.638157 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-user-template-error\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.638311 master-0 kubenswrapper[28120]: I0220 15:06:38.638273 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.638378 master-0 kubenswrapper[28120]: I0220 15:06:38.638347 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbg5w\" (UniqueName: \"kubernetes.io/projected/66e941e3-32f6-413b-a363-b611f5e2ae71-kube-api-access-gbg5w\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.638456 master-0 kubenswrapper[28120]: I0220 15:06:38.638390 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.639017 master-0 kubenswrapper[28120]: I0220 15:06:38.638909 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-service-ca\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.639727 master-0 kubenswrapper[28120]: I0220 15:06:38.639672 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.639916 master-0 kubenswrapper[28120]: I0220 15:06:38.639867 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/66e941e3-32f6-413b-a363-b611f5e2ae71-audit-policies\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.640510 master-0 kubenswrapper[28120]: I0220 15:06:38.640440 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: 
\"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.644440 master-0 kubenswrapper[28120]: I0220 15:06:38.644376 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-user-template-error\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.644560 master-0 kubenswrapper[28120]: I0220 15:06:38.644518 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-user-template-login\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.644637 master-0 kubenswrapper[28120]: I0220 15:06:38.644560 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.644991 master-0 kubenswrapper[28120]: I0220 15:06:38.644914 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.645146 master-0 kubenswrapper[28120]: I0220 
15:06:38.645083 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-session\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.645492 master-0 kubenswrapper[28120]: I0220 15:06:38.645410 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.645816 master-0 kubenswrapper[28120]: I0220 15:06:38.645507 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/66e941e3-32f6-413b-a363-b611f5e2ae71-v4-0-config-system-router-certs\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.668971 master-0 kubenswrapper[28120]: I0220 15:06:38.668876 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbg5w\" (UniqueName: \"kubernetes.io/projected/66e941e3-32f6-413b-a363-b611f5e2ae71-kube-api-access-gbg5w\") pod \"oauth-openshift-58f7754bd5-24tqr\" (UID: \"66e941e3-32f6-413b-a363-b611f5e2ae71\") " pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:38.760124 master-0 kubenswrapper[28120]: I0220 15:06:38.760034 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:39.318426 master-0 kubenswrapper[28120]: I0220 15:06:39.318345 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58f7754bd5-24tqr"] Feb 20 15:06:39.321348 master-0 kubenswrapper[28120]: W0220 15:06:39.321281 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66e941e3_32f6_413b_a363_b611f5e2ae71.slice/crio-770e0afdc941f5f662955b4f3740943b624226f2cb05a7e0db7c5ef1a5bad3a2 WatchSource:0}: Error finding container 770e0afdc941f5f662955b4f3740943b624226f2cb05a7e0db7c5ef1a5bad3a2: Status 404 returned error can't find the container with id 770e0afdc941f5f662955b4f3740943b624226f2cb05a7e0db7c5ef1a5bad3a2 Feb 20 15:06:39.843968 master-0 kubenswrapper[28120]: I0220 15:06:39.843801 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" event={"ID":"66e941e3-32f6-413b-a363-b611f5e2ae71","Type":"ContainerStarted","Data":"5f5a38d35daa280dbeb25ba25c08ebb3bf7a8ee2580352c955bdaa926b191445"} Feb 20 15:06:39.843968 master-0 kubenswrapper[28120]: I0220 15:06:39.843894 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" event={"ID":"66e941e3-32f6-413b-a363-b611f5e2ae71","Type":"ContainerStarted","Data":"770e0afdc941f5f662955b4f3740943b624226f2cb05a7e0db7c5ef1a5bad3a2"} Feb 20 15:06:39.844408 master-0 kubenswrapper[28120]: I0220 15:06:39.844096 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:39.879849 master-0 kubenswrapper[28120]: I0220 15:06:39.879611 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" podStartSLOduration=29.879587831 
podStartE2EDuration="29.879587831s" podCreationTimestamp="2026-02-20 15:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:06:39.874026651 +0000 UTC m=+338.134820244" watchObservedRunningTime="2026-02-20 15:06:39.879587831 +0000 UTC m=+338.140381434" Feb 20 15:06:40.068457 master-0 kubenswrapper[28120]: I0220 15:06:40.068385 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc47b01c-8c45-4e60-9558-7d34770441d4" path="/var/lib/kubelet/pods/bc47b01c-8c45-4e60-9558-7d34770441d4/volumes" Feb 20 15:06:40.231582 master-0 kubenswrapper[28120]: I0220 15:06:40.231474 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58f7754bd5-24tqr" Feb 20 15:06:43.895353 master-0 kubenswrapper[28120]: I0220 15:06:43.895249 28120 patch_prober.go:28] interesting pod/console-6bcb747b79-dfz8f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Feb 20 15:06:43.896737 master-0 kubenswrapper[28120]: I0220 15:06:43.895350 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6bcb747b79-dfz8f" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Feb 20 15:06:44.232424 master-0 kubenswrapper[28120]: I0220 15:06:44.232243 28120 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 20 15:06:44.232750 master-0 kubenswrapper[28120]: I0220 15:06:44.232666 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="487622064474ed0ec70f7bf2a0fcb80b" 
containerName="kube-apiserver" containerID="cri-o://949dc0da88d4a4460d2cb97e0f3c24fd8362d270a7436b3ee3b6374fb6d9feb8" gracePeriod=15 Feb 20 15:06:44.232848 master-0 kubenswrapper[28120]: I0220 15:06:44.232724 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c03e02eb5af2eb1fb6c7b0bf2b150876badffa6c377fc640e73fe017195c3955" gracePeriod=15 Feb 20 15:06:44.232848 master-0 kubenswrapper[28120]: I0220 15:06:44.232687 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9ffb45550122418b770ffe4855a0d3263f2d5ff36573901ef2f27c5cf8525e1a" gracePeriod=15 Feb 20 15:06:44.233017 master-0 kubenswrapper[28120]: I0220 15:06:44.232770 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://68667de4f5fd3fb66f60d8286fdaa32ea9f4758986e5bccc5b915c5cb496818d" gracePeriod=15 Feb 20 15:06:44.233017 master-0 kubenswrapper[28120]: I0220 15:06:44.232724 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2a0e88d060e1d652903a6e0684ac8c18552ee919654dcf829cbacf4c11cd8f63" gracePeriod=15 Feb 20 15:06:44.234886 master-0 kubenswrapper[28120]: I0220 15:06:44.234832 28120 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 20 15:06:44.235226 master-0 kubenswrapper[28120]: E0220 15:06:44.235182 
28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-insecure-readyz" Feb 20 15:06:44.235226 master-0 kubenswrapper[28120]: I0220 15:06:44.235209 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-insecure-readyz" Feb 20 15:06:44.235364 master-0 kubenswrapper[28120]: E0220 15:06:44.235234 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="setup" Feb 20 15:06:44.235364 master-0 kubenswrapper[28120]: I0220 15:06:44.235243 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="setup" Feb 20 15:06:44.235364 master-0 kubenswrapper[28120]: E0220 15:06:44.235258 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver" Feb 20 15:06:44.235364 master-0 kubenswrapper[28120]: I0220 15:06:44.235266 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver" Feb 20 15:06:44.235364 master-0 kubenswrapper[28120]: E0220 15:06:44.235284 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-cert-regeneration-controller" Feb 20 15:06:44.235364 master-0 kubenswrapper[28120]: I0220 15:06:44.235293 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-cert-regeneration-controller" Feb 20 15:06:44.235364 master-0 kubenswrapper[28120]: E0220 15:06:44.235305 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-check-endpoints" Feb 20 15:06:44.235364 master-0 kubenswrapper[28120]: I0220 15:06:44.235312 28120 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-check-endpoints" Feb 20 15:06:44.235364 master-0 kubenswrapper[28120]: E0220 15:06:44.235345 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-cert-syncer" Feb 20 15:06:44.235364 master-0 kubenswrapper[28120]: I0220 15:06:44.235354 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-cert-syncer" Feb 20 15:06:44.236250 master-0 kubenswrapper[28120]: I0220 15:06:44.235512 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-cert-syncer" Feb 20 15:06:44.236250 master-0 kubenswrapper[28120]: I0220 15:06:44.235531 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-cert-regeneration-controller" Feb 20 15:06:44.236250 master-0 kubenswrapper[28120]: I0220 15:06:44.235543 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-insecure-readyz" Feb 20 15:06:44.236250 master-0 kubenswrapper[28120]: I0220 15:06:44.235565 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver-check-endpoints" Feb 20 15:06:44.236250 master-0 kubenswrapper[28120]: I0220 15:06:44.235588 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="487622064474ed0ec70f7bf2a0fcb80b" containerName="kube-apiserver" Feb 20 15:06:44.238356 master-0 kubenswrapper[28120]: I0220 15:06:44.238263 28120 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 20 15:06:44.240992 master-0 kubenswrapper[28120]: I0220 15:06:44.240807 28120 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:06:44.251648 master-0 kubenswrapper[28120]: I0220 15:06:44.250659 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="487622064474ed0ec70f7bf2a0fcb80b" podUID="ea6211626f1fb00ff366fe3d5c42f298" Feb 20 15:06:44.354816 master-0 kubenswrapper[28120]: I0220 15:06:44.354771 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:06:44.355045 master-0 kubenswrapper[28120]: I0220 15:06:44.354830 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ea6211626f1fb00ff366fe3d5c42f298-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ea6211626f1fb00ff366fe3d5c42f298\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:06:44.355045 master-0 kubenswrapper[28120]: I0220 15:06:44.354972 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ea6211626f1fb00ff366fe3d5c42f298-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ea6211626f1fb00ff366fe3d5c42f298\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:06:44.355227 master-0 kubenswrapper[28120]: I0220 15:06:44.355151 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" 
(UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:06:44.355296 master-0 kubenswrapper[28120]: I0220 15:06:44.355259 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:06:44.355781 master-0 kubenswrapper[28120]: I0220 15:06:44.355524 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea6211626f1fb00ff366fe3d5c42f298-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ea6211626f1fb00ff366fe3d5c42f298\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:06:44.355781 master-0 kubenswrapper[28120]: I0220 15:06:44.355666 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:06:44.355906 master-0 kubenswrapper[28120]: I0220 15:06:44.355827 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:06:44.457420 master-0 kubenswrapper[28120]: I0220 15:06:44.457340 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ea6211626f1fb00ff366fe3d5c42f298-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ea6211626f1fb00ff366fe3d5c42f298\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:06:44.457652 master-0 kubenswrapper[28120]: I0220 15:06:44.457443 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ea6211626f1fb00ff366fe3d5c42f298-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ea6211626f1fb00ff366fe3d5c42f298\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:06:44.457652 master-0 kubenswrapper[28120]: I0220 15:06:44.457494 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:06:44.457652 master-0 kubenswrapper[28120]: I0220 15:06:44.457543 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:06:44.457884 master-0 kubenswrapper[28120]: I0220 15:06:44.457533 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ea6211626f1fb00ff366fe3d5c42f298-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ea6211626f1fb00ff366fe3d5c42f298\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:06:44.457884 master-0 kubenswrapper[28120]: I0220 15:06:44.457706 28120 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea6211626f1fb00ff366fe3d5c42f298-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ea6211626f1fb00ff366fe3d5c42f298\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:06:44.457884 master-0 kubenswrapper[28120]: I0220 15:06:44.457661 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea6211626f1fb00ff366fe3d5c42f298-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ea6211626f1fb00ff366fe3d5c42f298\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:06:44.457884 master-0 kubenswrapper[28120]: I0220 15:06:44.457686 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ea6211626f1fb00ff366fe3d5c42f298-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ea6211626f1fb00ff366fe3d5c42f298\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:06:44.457884 master-0 kubenswrapper[28120]: I0220 15:06:44.457793 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:06:44.457884 master-0 kubenswrapper[28120]: I0220 15:06:44.457783 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:06:44.457884 master-0 kubenswrapper[28120]: I0220 15:06:44.457846 
28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:06:44.458337 master-0 kubenswrapper[28120]: I0220 15:06:44.457891 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:06:44.458337 master-0 kubenswrapper[28120]: I0220 15:06:44.457992 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:06:44.458337 master-0 kubenswrapper[28120]: I0220 15:06:44.458005 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:06:44.458337 master-0 kubenswrapper[28120]: I0220 15:06:44.458076 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:06:44.458337 master-0 kubenswrapper[28120]: I0220 15:06:44.458028 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:06:44.903107 master-0 kubenswrapper[28120]: I0220 15:06:44.903036 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_487622064474ed0ec70f7bf2a0fcb80b/kube-apiserver-cert-syncer/0.log"
Feb 20 15:06:44.904997 master-0 kubenswrapper[28120]: I0220 15:06:44.904901 28120 generic.go:334] "Generic (PLEG): container finished" podID="487622064474ed0ec70f7bf2a0fcb80b" containerID="9ffb45550122418b770ffe4855a0d3263f2d5ff36573901ef2f27c5cf8525e1a" exitCode=0
Feb 20 15:06:44.905108 master-0 kubenswrapper[28120]: I0220 15:06:44.904997 28120 generic.go:334] "Generic (PLEG): container finished" podID="487622064474ed0ec70f7bf2a0fcb80b" containerID="68667de4f5fd3fb66f60d8286fdaa32ea9f4758986e5bccc5b915c5cb496818d" exitCode=0
Feb 20 15:06:44.905108 master-0 kubenswrapper[28120]: I0220 15:06:44.905017 28120 generic.go:334] "Generic (PLEG): container finished" podID="487622064474ed0ec70f7bf2a0fcb80b" containerID="2a0e88d060e1d652903a6e0684ac8c18552ee919654dcf829cbacf4c11cd8f63" exitCode=0
Feb 20 15:06:44.905108 master-0 kubenswrapper[28120]: I0220 15:06:44.905032 28120 generic.go:334] "Generic (PLEG): container finished" podID="487622064474ed0ec70f7bf2a0fcb80b" containerID="c03e02eb5af2eb1fb6c7b0bf2b150876badffa6c377fc640e73fe017195c3955" exitCode=2
Feb 20 15:06:44.908635 master-0 kubenswrapper[28120]: I0220 15:06:44.908560 28120 generic.go:334] "Generic (PLEG): container finished" podID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" containerID="b59d80a3e1ed21ce35267f7624e91d5c503d2c8145aa26f5499f55751a4bcbb1" exitCode=0
Feb 20 15:06:44.908771 master-0 kubenswrapper[28120]: I0220 15:06:44.908629 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-8-master-0" event={"ID":"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6","Type":"ContainerDied","Data":"b59d80a3e1ed21ce35267f7624e91d5c503d2c8145aa26f5499f55751a4bcbb1"}
Feb 20 15:06:44.910341 master-0 kubenswrapper[28120]: I0220 15:06:44.910264 28120 status_manager.go:851] "Failed to get status for pod" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" pod="openshift-kube-apiserver/installer-8-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-8-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:46.623237 master-0 kubenswrapper[28120]: I0220 15:06:46.623182 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-8-master-0"
Feb 20 15:06:46.624571 master-0 kubenswrapper[28120]: I0220 15:06:46.624510 28120 status_manager.go:851] "Failed to get status for pod" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" pod="openshift-kube-apiserver/installer-8-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-8-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:46.629265 master-0 kubenswrapper[28120]: I0220 15:06:46.629231 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_487622064474ed0ec70f7bf2a0fcb80b/kube-apiserver-cert-syncer/0.log"
Feb 20 15:06:46.630260 master-0 kubenswrapper[28120]: I0220 15:06:46.630205 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:06:46.631558 master-0 kubenswrapper[28120]: I0220 15:06:46.631503 28120 status_manager.go:851] "Failed to get status for pod" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" pod="openshift-kube-apiserver/installer-8-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-8-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:46.632310 master-0 kubenswrapper[28120]: I0220 15:06:46.632269 28120 status_manager.go:851] "Failed to get status for pod" podUID="487622064474ed0ec70f7bf2a0fcb80b" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:46.720201 master-0 kubenswrapper[28120]: I0220 15:06:46.720138 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-kube-api-access\") pod \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\" (UID: \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\") "
Feb 20 15:06:46.720443 master-0 kubenswrapper[28120]: I0220 15:06:46.720331 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-var-lock\") pod \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\" (UID: \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\") "
Feb 20 15:06:46.720443 master-0 kubenswrapper[28120]: I0220 15:06:46.720373 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"487622064474ed0ec70f7bf2a0fcb80b\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") "
Feb 20 15:06:46.720547 master-0 kubenswrapper[28120]: I0220 15:06:46.720471 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"487622064474ed0ec70f7bf2a0fcb80b\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") "
Feb 20 15:06:46.720547 master-0 kubenswrapper[28120]: I0220 15:06:46.720449 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-var-lock" (OuterVolumeSpecName: "var-lock") pod "78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" (UID: "78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:06:46.720547 master-0 kubenswrapper[28120]: I0220 15:06:46.720486 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"487622064474ed0ec70f7bf2a0fcb80b\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") "
Feb 20 15:06:46.720709 master-0 kubenswrapper[28120]: I0220 15:06:46.720555 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "487622064474ed0ec70f7bf2a0fcb80b" (UID: "487622064474ed0ec70f7bf2a0fcb80b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:06:46.720709 master-0 kubenswrapper[28120]: I0220 15:06:46.720591 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "487622064474ed0ec70f7bf2a0fcb80b" (UID: "487622064474ed0ec70f7bf2a0fcb80b"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:06:46.720709 master-0 kubenswrapper[28120]: I0220 15:06:46.720604 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "487622064474ed0ec70f7bf2a0fcb80b" (UID: "487622064474ed0ec70f7bf2a0fcb80b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:06:46.720709 master-0 kubenswrapper[28120]: I0220 15:06:46.720631 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-kubelet-dir\") pod \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\" (UID: \"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6\") "
Feb 20 15:06:46.720709 master-0 kubenswrapper[28120]: I0220 15:06:46.720692 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" (UID: "78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:06:46.721790 master-0 kubenswrapper[28120]: I0220 15:06:46.721725 28120 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:46.721790 master-0 kubenswrapper[28120]: I0220 15:06:46.721781 28120 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:46.721790 master-0 kubenswrapper[28120]: I0220 15:06:46.721808 28120 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:46.722246 master-0 kubenswrapper[28120]: I0220 15:06:46.721834 28120 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:46.722246 master-0 kubenswrapper[28120]: I0220 15:06:46.721858 28120 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:46.723469 master-0 kubenswrapper[28120]: I0220 15:06:46.723411 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" (UID: "78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:06:46.823831 master-0 kubenswrapper[28120]: I0220 15:06:46.823720 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 20 15:06:46.932680 master-0 kubenswrapper[28120]: I0220 15:06:46.932606 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_487622064474ed0ec70f7bf2a0fcb80b/kube-apiserver-cert-syncer/0.log"
Feb 20 15:06:46.933474 master-0 kubenswrapper[28120]: I0220 15:06:46.933425 28120 generic.go:334] "Generic (PLEG): container finished" podID="487622064474ed0ec70f7bf2a0fcb80b" containerID="949dc0da88d4a4460d2cb97e0f3c24fd8362d270a7436b3ee3b6374fb6d9feb8" exitCode=0
Feb 20 15:06:46.933575 master-0 kubenswrapper[28120]: I0220 15:06:46.933521 28120 scope.go:117] "RemoveContainer" containerID="9ffb45550122418b770ffe4855a0d3263f2d5ff36573901ef2f27c5cf8525e1a"
Feb 20 15:06:46.933640 master-0 kubenswrapper[28120]: I0220 15:06:46.933563 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:06:46.935840 master-0 kubenswrapper[28120]: I0220 15:06:46.935782 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-8-master-0" event={"ID":"78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6","Type":"ContainerDied","Data":"81938480325b6afb4453dfcf8feefefeaa92ffdb19e84afa903a902748890b30"}
Feb 20 15:06:46.935840 master-0 kubenswrapper[28120]: I0220 15:06:46.935833 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81938480325b6afb4453dfcf8feefefeaa92ffdb19e84afa903a902748890b30"
Feb 20 15:06:46.936050 master-0 kubenswrapper[28120]: I0220 15:06:46.935889 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-8-master-0"
Feb 20 15:06:46.967263 master-0 kubenswrapper[28120]: I0220 15:06:46.967150 28120 scope.go:117] "RemoveContainer" containerID="68667de4f5fd3fb66f60d8286fdaa32ea9f4758986e5bccc5b915c5cb496818d"
Feb 20 15:06:46.980148 master-0 kubenswrapper[28120]: I0220 15:06:46.980040 28120 status_manager.go:851] "Failed to get status for pod" podUID="487622064474ed0ec70f7bf2a0fcb80b" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:46.981033 master-0 kubenswrapper[28120]: I0220 15:06:46.980902 28120 status_manager.go:851] "Failed to get status for pod" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" pod="openshift-kube-apiserver/installer-8-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-8-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:46.982361 master-0 kubenswrapper[28120]: I0220 15:06:46.982285 28120 status_manager.go:851] "Failed to get status for pod" podUID="487622064474ed0ec70f7bf2a0fcb80b" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:46.984382 master-0 kubenswrapper[28120]: I0220 15:06:46.984204 28120 status_manager.go:851] "Failed to get status for pod" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" pod="openshift-kube-apiserver/installer-8-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-8-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:46.997712 master-0 kubenswrapper[28120]: I0220 15:06:46.997641 28120 scope.go:117] "RemoveContainer" containerID="2a0e88d060e1d652903a6e0684ac8c18552ee919654dcf829cbacf4c11cd8f63"
Feb 20 15:06:47.024299 master-0 kubenswrapper[28120]: I0220 15:06:47.024207 28120 scope.go:117] "RemoveContainer" containerID="c03e02eb5af2eb1fb6c7b0bf2b150876badffa6c377fc640e73fe017195c3955"
Feb 20 15:06:47.048868 master-0 kubenswrapper[28120]: I0220 15:06:47.048816 28120 scope.go:117] "RemoveContainer" containerID="949dc0da88d4a4460d2cb97e0f3c24fd8362d270a7436b3ee3b6374fb6d9feb8"
Feb 20 15:06:47.080489 master-0 kubenswrapper[28120]: I0220 15:06:47.080420 28120 scope.go:117] "RemoveContainer" containerID="8f02c0ec25be8ae620d31fbe6d306947830daf49b081fbed83f45fe35912fea0"
Feb 20 15:06:47.113490 master-0 kubenswrapper[28120]: I0220 15:06:47.113436 28120 scope.go:117] "RemoveContainer" containerID="9ffb45550122418b770ffe4855a0d3263f2d5ff36573901ef2f27c5cf8525e1a"
Feb 20 15:06:47.114121 master-0 kubenswrapper[28120]: E0220 15:06:47.114071 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ffb45550122418b770ffe4855a0d3263f2d5ff36573901ef2f27c5cf8525e1a\": container with ID starting with 9ffb45550122418b770ffe4855a0d3263f2d5ff36573901ef2f27c5cf8525e1a not found: ID does not exist" containerID="9ffb45550122418b770ffe4855a0d3263f2d5ff36573901ef2f27c5cf8525e1a"
Feb 20 15:06:47.114190 master-0 kubenswrapper[28120]: I0220 15:06:47.114125 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ffb45550122418b770ffe4855a0d3263f2d5ff36573901ef2f27c5cf8525e1a"} err="failed to get container status \"9ffb45550122418b770ffe4855a0d3263f2d5ff36573901ef2f27c5cf8525e1a\": rpc error: code = NotFound desc = could not find container \"9ffb45550122418b770ffe4855a0d3263f2d5ff36573901ef2f27c5cf8525e1a\": container with ID starting with 9ffb45550122418b770ffe4855a0d3263f2d5ff36573901ef2f27c5cf8525e1a not found: ID does not exist"
Feb 20 15:06:47.114190 master-0 kubenswrapper[28120]: I0220 15:06:47.114166 28120 scope.go:117] "RemoveContainer" containerID="68667de4f5fd3fb66f60d8286fdaa32ea9f4758986e5bccc5b915c5cb496818d"
Feb 20 15:06:47.114961 master-0 kubenswrapper[28120]: E0220 15:06:47.114864 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68667de4f5fd3fb66f60d8286fdaa32ea9f4758986e5bccc5b915c5cb496818d\": container with ID starting with 68667de4f5fd3fb66f60d8286fdaa32ea9f4758986e5bccc5b915c5cb496818d not found: ID does not exist" containerID="68667de4f5fd3fb66f60d8286fdaa32ea9f4758986e5bccc5b915c5cb496818d"
Feb 20 15:06:47.115030 master-0 kubenswrapper[28120]: I0220 15:06:47.114970 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68667de4f5fd3fb66f60d8286fdaa32ea9f4758986e5bccc5b915c5cb496818d"} err="failed to get container status \"68667de4f5fd3fb66f60d8286fdaa32ea9f4758986e5bccc5b915c5cb496818d\": rpc error: code = NotFound desc = could not find container \"68667de4f5fd3fb66f60d8286fdaa32ea9f4758986e5bccc5b915c5cb496818d\": container with ID starting with 68667de4f5fd3fb66f60d8286fdaa32ea9f4758986e5bccc5b915c5cb496818d not found: ID does not exist"
Feb 20 15:06:47.115069 master-0 kubenswrapper[28120]: I0220 15:06:47.115032 28120 scope.go:117] "RemoveContainer" containerID="2a0e88d060e1d652903a6e0684ac8c18552ee919654dcf829cbacf4c11cd8f63"
Feb 20 15:06:47.115579 master-0 kubenswrapper[28120]: E0220 15:06:47.115533 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a0e88d060e1d652903a6e0684ac8c18552ee919654dcf829cbacf4c11cd8f63\": container with ID starting with 2a0e88d060e1d652903a6e0684ac8c18552ee919654dcf829cbacf4c11cd8f63 not found: ID does not exist" containerID="2a0e88d060e1d652903a6e0684ac8c18552ee919654dcf829cbacf4c11cd8f63"
Feb 20 15:06:47.115625 master-0 kubenswrapper[28120]: I0220 15:06:47.115584 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a0e88d060e1d652903a6e0684ac8c18552ee919654dcf829cbacf4c11cd8f63"} err="failed to get container status \"2a0e88d060e1d652903a6e0684ac8c18552ee919654dcf829cbacf4c11cd8f63\": rpc error: code = NotFound desc = could not find container \"2a0e88d060e1d652903a6e0684ac8c18552ee919654dcf829cbacf4c11cd8f63\": container with ID starting with 2a0e88d060e1d652903a6e0684ac8c18552ee919654dcf829cbacf4c11cd8f63 not found: ID does not exist"
Feb 20 15:06:47.115625 master-0 kubenswrapper[28120]: I0220 15:06:47.115612 28120 scope.go:117] "RemoveContainer" containerID="c03e02eb5af2eb1fb6c7b0bf2b150876badffa6c377fc640e73fe017195c3955"
Feb 20 15:06:47.116040 master-0 kubenswrapper[28120]: E0220 15:06:47.116000 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c03e02eb5af2eb1fb6c7b0bf2b150876badffa6c377fc640e73fe017195c3955\": container with ID starting with c03e02eb5af2eb1fb6c7b0bf2b150876badffa6c377fc640e73fe017195c3955 not found: ID does not exist" containerID="c03e02eb5af2eb1fb6c7b0bf2b150876badffa6c377fc640e73fe017195c3955"
Feb 20 15:06:47.116089 master-0 kubenswrapper[28120]: I0220 15:06:47.116054 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03e02eb5af2eb1fb6c7b0bf2b150876badffa6c377fc640e73fe017195c3955"} err="failed to get container status \"c03e02eb5af2eb1fb6c7b0bf2b150876badffa6c377fc640e73fe017195c3955\": rpc error: code = NotFound desc = could not find container \"c03e02eb5af2eb1fb6c7b0bf2b150876badffa6c377fc640e73fe017195c3955\": container with ID starting with c03e02eb5af2eb1fb6c7b0bf2b150876badffa6c377fc640e73fe017195c3955 not found: ID does not exist"
Feb 20 15:06:47.116089 master-0 kubenswrapper[28120]: I0220 15:06:47.116076 28120 scope.go:117] "RemoveContainer" containerID="949dc0da88d4a4460d2cb97e0f3c24fd8362d270a7436b3ee3b6374fb6d9feb8"
Feb 20 15:06:47.116506 master-0 kubenswrapper[28120]: E0220 15:06:47.116461 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"949dc0da88d4a4460d2cb97e0f3c24fd8362d270a7436b3ee3b6374fb6d9feb8\": container with ID starting with 949dc0da88d4a4460d2cb97e0f3c24fd8362d270a7436b3ee3b6374fb6d9feb8 not found: ID does not exist" containerID="949dc0da88d4a4460d2cb97e0f3c24fd8362d270a7436b3ee3b6374fb6d9feb8"
Feb 20 15:06:47.116550 master-0 kubenswrapper[28120]: I0220 15:06:47.116506 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"949dc0da88d4a4460d2cb97e0f3c24fd8362d270a7436b3ee3b6374fb6d9feb8"} err="failed to get container status \"949dc0da88d4a4460d2cb97e0f3c24fd8362d270a7436b3ee3b6374fb6d9feb8\": rpc error: code = NotFound desc = could not find container \"949dc0da88d4a4460d2cb97e0f3c24fd8362d270a7436b3ee3b6374fb6d9feb8\": container with ID starting with 949dc0da88d4a4460d2cb97e0f3c24fd8362d270a7436b3ee3b6374fb6d9feb8 not found: ID does not exist"
Feb 20 15:06:47.116550 master-0 kubenswrapper[28120]: I0220 15:06:47.116533 28120 scope.go:117] "RemoveContainer" containerID="8f02c0ec25be8ae620d31fbe6d306947830daf49b081fbed83f45fe35912fea0"
Feb 20 15:06:47.117073 master-0 kubenswrapper[28120]: E0220 15:06:47.117020 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f02c0ec25be8ae620d31fbe6d306947830daf49b081fbed83f45fe35912fea0\": container with ID starting with 8f02c0ec25be8ae620d31fbe6d306947830daf49b081fbed83f45fe35912fea0 not found: ID does not exist" containerID="8f02c0ec25be8ae620d31fbe6d306947830daf49b081fbed83f45fe35912fea0"
Feb 20 15:06:47.117073 master-0 kubenswrapper[28120]: I0220 15:06:47.117058 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f02c0ec25be8ae620d31fbe6d306947830daf49b081fbed83f45fe35912fea0"} err="failed to get container status \"8f02c0ec25be8ae620d31fbe6d306947830daf49b081fbed83f45fe35912fea0\": rpc error: code = NotFound desc = could not find container \"8f02c0ec25be8ae620d31fbe6d306947830daf49b081fbed83f45fe35912fea0\": container with ID starting with 8f02c0ec25be8ae620d31fbe6d306947830daf49b081fbed83f45fe35912fea0 not found: ID does not exist"
Feb 20 15:06:47.213169 master-0 kubenswrapper[28120]: I0220 15:06:47.213096 28120 patch_prober.go:28] interesting pod/console-68cd6dbb78-rhjhv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Feb 20 15:06:47.213169 master-0 kubenswrapper[28120]: I0220 15:06:47.213175 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68cd6dbb78-rhjhv" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Feb 20 15:06:47.214672 master-0 kubenswrapper[28120]: E0220 15:06:47.214209 28120 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/events/console-68cd6dbb78-rhjhv.1895fcc8a9e4a353\": dial tcp 192.168.32.10:6443: connect: connection refused" event=<
Feb 20 15:06:47.214672 master-0 kubenswrapper[28120]: &Event{ObjectMeta:{console-68cd6dbb78-rhjhv.1895fcc8a9e4a353 openshift-console 18137 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-console,Name:console-68cd6dbb78-rhjhv,UID:bbe031c3-3ab8-42af-ab24-718d83d7d121,APIVersion:v1,ResourceVersion:17574,FieldPath:spec.containers{console},},Reason:ProbeError,Message:Startup probe error: Get "https://10.128.0.104:8443/health": dial tcp 10.128.0.104:8443: connect: connection refused
Feb 20 15:06:47.214672 master-0 kubenswrapper[28120]: body:
Feb 20 15:06:47.214672 master-0 kubenswrapper[28120]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 15:06:07 +0000 UTC,LastTimestamp:2026-02-20 15:06:47.213152003 +0000 UTC m=+345.473945566,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}
Feb 20 15:06:47.214672 master-0 kubenswrapper[28120]: >
Feb 20 15:06:48.028701 master-0 kubenswrapper[28120]: E0220 15:06:48.028623 28120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:48.029604 master-0 kubenswrapper[28120]: E0220 15:06:48.029386 28120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:48.029947 master-0 kubenswrapper[28120]: E0220 15:06:48.029843 28120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:48.031001 master-0 kubenswrapper[28120]: E0220 15:06:48.030911 28120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:48.032213 master-0 kubenswrapper[28120]: E0220 15:06:48.032128 28120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:48.032213 master-0 kubenswrapper[28120]: I0220 15:06:48.032211 28120 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 20 15:06:48.033102 master-0 kubenswrapper[28120]: E0220 15:06:48.032988 28120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Feb 20 15:06:48.067411 master-0 kubenswrapper[28120]: I0220 15:06:48.067339 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="487622064474ed0ec70f7bf2a0fcb80b" path="/var/lib/kubelet/pods/487622064474ed0ec70f7bf2a0fcb80b/volumes"
Feb 20 15:06:48.235138 master-0 kubenswrapper[28120]: E0220 15:06:48.235015 28120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Feb 20 15:06:48.637365 master-0 kubenswrapper[28120]: E0220 15:06:48.637273 28120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 20 15:06:49.314387 master-0 kubenswrapper[28120]: E0220 15:06:49.314282 28120 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:06:49.315214 master-0 kubenswrapper[28120]: I0220 15:06:49.314855 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:06:49.361171 master-0 kubenswrapper[28120]: W0220 15:06:49.361071 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb562c0e662e60a50528cc033a91352f2.slice/crio-847b9f0dfeff46bab2ff144c6cb83b7546e030336dfe4cae9bbabb7c4275f02a WatchSource:0}: Error finding container 847b9f0dfeff46bab2ff144c6cb83b7546e030336dfe4cae9bbabb7c4275f02a: Status 404 returned error can't find the container with id 847b9f0dfeff46bab2ff144c6cb83b7546e030336dfe4cae9bbabb7c4275f02a
Feb 20 15:06:49.438814 master-0 kubenswrapper[28120]: E0220 15:06:49.438759 28120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Feb 20 15:06:49.978333 master-0 kubenswrapper[28120]: I0220 15:06:49.977984 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"b562c0e662e60a50528cc033a91352f2","Type":"ContainerStarted","Data":"b565aadbad872ccb9f55e50e2a8984e6d0b605d8356a58bc1a9a25819417ad92"}
Feb 20 15:06:49.978333 master-0 kubenswrapper[28120]: I0220 15:06:49.978056 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"b562c0e662e60a50528cc033a91352f2","Type":"ContainerStarted","Data":"847b9f0dfeff46bab2ff144c6cb83b7546e030336dfe4cae9bbabb7c4275f02a"}
Feb 20 15:06:49.979495 master-0 kubenswrapper[28120]: I0220 15:06:49.979430 28120 status_manager.go:851] "Failed to get status for pod" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" pod="openshift-kube-apiserver/installer-8-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-8-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:49.979635 master-0 kubenswrapper[28120]: E0220 15:06:49.979546 28120 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:06:51.040719 master-0 kubenswrapper[28120]: E0220 15:06:51.040636 28120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Feb 20 15:06:51.411754 master-0 kubenswrapper[28120]: E0220 15:06:51.411542 28120 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T15:06:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T15:06:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T15:06:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-20T15:06:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:51.412558 master-0 kubenswrapper[28120]: E0220 15:06:51.412497 28120 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:51.413539 master-0 kubenswrapper[28120]: E0220 15:06:51.413486 28120 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:51.414356 master-0 kubenswrapper[28120]: E0220 15:06:51.414288 28120 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:51.415261 master-0 kubenswrapper[28120]: E0220 15:06:51.415207 28120 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:51.415261 master-0 kubenswrapper[28120]: E0220 15:06:51.415246 28120 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 20 15:06:52.066739 master-0 kubenswrapper[28120]: I0220 15:06:52.066616 28120 status_manager.go:851] "Failed to get status for pod" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" pod="openshift-kube-apiserver/installer-8-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-8-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 20 15:06:53.895964 master-0 kubenswrapper[28120]: I0220 15:06:53.895837 28120 patch_prober.go:28] interesting pod/console-6bcb747b79-dfz8f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body=
Feb 20 15:06:53.896956 master-0 kubenswrapper[28120]: I0220 15:06:53.895980 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6bcb747b79-dfz8f" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused"
Feb 20 15:06:54.242843 master-0 kubenswrapper[28120]: E0220 15:06:54.242748 28120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Feb 20 15:06:55.458676 master-0 kubenswrapper[28120]: E0220 15:06:55.458518 28120 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/events/console-68cd6dbb78-rhjhv.1895fcc8a9e4a353\": dial tcp 192.168.32.10:6443: connect: connection refused" event=<
Feb 20 15:06:55.458676 master-0 kubenswrapper[28120]: &Event{ObjectMeta:{console-68cd6dbb78-rhjhv.1895fcc8a9e4a353 openshift-console 18137 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-console,Name:console-68cd6dbb78-rhjhv,UID:bbe031c3-3ab8-42af-ab24-718d83d7d121,APIVersion:v1,ResourceVersion:17574,FieldPath:spec.containers{console},},Reason:ProbeError,Message:Startup probe error: Get "https://10.128.0.104:8443/health": dial tcp 10.128.0.104:8443: connect: connection refused
Feb 20 15:06:55.458676 master-0 kubenswrapper[28120]: body:
Feb 20 15:06:55.458676 master-0 kubenswrapper[28120]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-20 15:06:07 +0000 UTC,LastTimestamp:2026-02-20 15:06:47.213152003 +0000 UTC m=+345.473945566,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}
Feb 20 15:06:55.458676 master-0 kubenswrapper[28120]: >
Feb 20 15:06:57.213967 master-0 kubenswrapper[28120]: I0220 15:06:57.213816 28120 patch_prober.go:28] interesting pod/console-68cd6dbb78-rhjhv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Feb 20 15:06:57.214796 master-0 kubenswrapper[28120]: I0220 15:06:57.213978 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68cd6dbb78-rhjhv" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Feb 20 15:06:58.062918 master-0 kubenswrapper[28120]: I0220 15:06:58.062837 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_84d9b64313fdfb9864d29171f85c889a/kube-controller-manager/1.log"
Feb 20 15:06:58.065799 master-0 kubenswrapper[28120]: I0220 15:06:58.065744 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_84d9b64313fdfb9864d29171f85c889a/kube-controller-manager/0.log"
Feb 20 15:06:58.066140 master-0 kubenswrapper[28120]: I0220 15:06:58.066090 28120 generic.go:334] "Generic (PLEG): container finished" podID="84d9b64313fdfb9864d29171f85c889a" containerID="39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1" exitCode=1
Feb 20 15:06:58.068783 master-0 kubenswrapper[28120]: I0220 15:06:58.068741 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"84d9b64313fdfb9864d29171f85c889a","Type":"ContainerDied","Data":"39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1"}
Feb 20 15:06:58.069124 master-0 kubenswrapper[28120]: I0220 15:06:58.069090 28120 scope.go:117] "RemoveContainer" containerID="08fd62cd27292ede66927864096610eb2cbbf6bc7bf62eed86f0d310cd58267b"
Feb 20 15:06:58.070330 master-0 kubenswrapper[28120]: I0220 15:06:58.070263 28120 scope.go:117] "RemoveContainer" containerID="39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1"
Feb 20 15:06:58.070761 master-0 kubenswrapper[28120]: I0220 15:06:58.070701 28120 status_manager.go:851] "Failed to get status for
pod" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" pod="openshift-kube-apiserver/installer-8-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-8-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:06:58.071136 master-0 kubenswrapper[28120]: E0220 15:06:58.071069 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(84d9b64313fdfb9864d29171f85c889a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="84d9b64313fdfb9864d29171f85c889a" Feb 20 15:06:58.072162 master-0 kubenswrapper[28120]: I0220 15:06:58.072092 28120 status_manager.go:851] "Failed to get status for pod" podUID="84d9b64313fdfb9864d29171f85c889a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:06:59.056355 master-0 kubenswrapper[28120]: I0220 15:06:59.056266 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:06:59.057985 master-0 kubenswrapper[28120]: I0220 15:06:59.057862 28120 status_manager.go:851] "Failed to get status for pod" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" pod="openshift-kube-apiserver/installer-8-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-8-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:06:59.060150 master-0 kubenswrapper[28120]: I0220 15:06:59.059100 28120 status_manager.go:851] "Failed to get status for pod" podUID="84d9b64313fdfb9864d29171f85c889a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:06:59.080952 master-0 kubenswrapper[28120]: I0220 15:06:59.080840 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_84d9b64313fdfb9864d29171f85c889a/kube-controller-manager/1.log" Feb 20 15:06:59.093052 master-0 kubenswrapper[28120]: I0220 15:06:59.092971 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71055f17-3b7a-49ac-b848-3ca2598ea5cb" Feb 20 15:06:59.093052 master-0 kubenswrapper[28120]: I0220 15:06:59.093044 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71055f17-3b7a-49ac-b848-3ca2598ea5cb" Feb 20 15:06:59.094317 master-0 kubenswrapper[28120]: E0220 15:06:59.094243 28120 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:06:59.094896 master-0 kubenswrapper[28120]: I0220 15:06:59.094845 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:06:59.135598 master-0 kubenswrapper[28120]: W0220 15:06:59.135521 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea6211626f1fb00ff366fe3d5c42f298.slice/crio-fa57925a55ee1a6ea2f8c5aa9287d79b2176bbd795f9cfaf332c1bdc508ac9af WatchSource:0}: Error finding container fa57925a55ee1a6ea2f8c5aa9287d79b2176bbd795f9cfaf332c1bdc508ac9af: Status 404 returned error can't find the container with id fa57925a55ee1a6ea2f8c5aa9287d79b2176bbd795f9cfaf332c1bdc508ac9af Feb 20 15:06:59.809149 master-0 kubenswrapper[28120]: I0220 15:06:59.809088 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:06:59.860158 master-0 kubenswrapper[28120]: I0220 15:06:59.860117 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:06:59.861389 master-0 kubenswrapper[28120]: I0220 15:06:59.861303 28120 status_manager.go:851] "Failed to get status for pod" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:06:59.862260 master-0 kubenswrapper[28120]: I0220 15:06:59.862181 28120 status_manager.go:851] "Failed to get status for pod" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" pod="openshift-kube-apiserver/installer-8-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-8-master-0\": dial tcp 192.168.32.10:6443: connect: 
connection refused" Feb 20 15:06:59.863167 master-0 kubenswrapper[28120]: I0220 15:06:59.863100 28120 status_manager.go:851] "Failed to get status for pod" podUID="84d9b64313fdfb9864d29171f85c889a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:07:00.097778 master-0 kubenswrapper[28120]: I0220 15:07:00.097587 28120 generic.go:334] "Generic (PLEG): container finished" podID="ea6211626f1fb00ff366fe3d5c42f298" containerID="9b1682df9693e3d436f6fc07379961138778863cf3c342c0ec957e9a05e4c6c2" exitCode=0 Feb 20 15:07:00.097778 master-0 kubenswrapper[28120]: I0220 15:07:00.097737 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ea6211626f1fb00ff366fe3d5c42f298","Type":"ContainerDied","Data":"9b1682df9693e3d436f6fc07379961138778863cf3c342c0ec957e9a05e4c6c2"} Feb 20 15:07:00.098688 master-0 kubenswrapper[28120]: I0220 15:07:00.097827 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ea6211626f1fb00ff366fe3d5c42f298","Type":"ContainerStarted","Data":"fa57925a55ee1a6ea2f8c5aa9287d79b2176bbd795f9cfaf332c1bdc508ac9af"} Feb 20 15:07:00.098768 master-0 kubenswrapper[28120]: I0220 15:07:00.098666 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71055f17-3b7a-49ac-b848-3ca2598ea5cb" Feb 20 15:07:00.098768 master-0 kubenswrapper[28120]: I0220 15:07:00.098714 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71055f17-3b7a-49ac-b848-3ca2598ea5cb" Feb 20 15:07:00.100035 master-0 kubenswrapper[28120]: E0220 15:07:00.099966 28120 mirror_client.go:138] "Failed deleting a mirror pod" 
err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:07:00.100160 master-0 kubenswrapper[28120]: I0220 15:07:00.099994 28120 status_manager.go:851] "Failed to get status for pod" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:07:00.101581 master-0 kubenswrapper[28120]: I0220 15:07:00.101085 28120 status_manager.go:851] "Failed to get status for pod" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" pod="openshift-kube-apiserver/installer-8-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-8-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:07:00.102092 master-0 kubenswrapper[28120]: I0220 15:07:00.102026 28120 status_manager.go:851] "Failed to get status for pod" podUID="84d9b64313fdfb9864d29171f85c889a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:07:00.149333 master-0 kubenswrapper[28120]: I0220 15:07:00.149265 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:07:00.150699 master-0 kubenswrapper[28120]: I0220 15:07:00.150622 28120 status_manager.go:851] "Failed to get status for pod" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" pod="openshift-monitoring/prometheus-k8s-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:07:00.151663 master-0 kubenswrapper[28120]: I0220 15:07:00.151586 28120 status_manager.go:851] "Failed to get status for pod" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" pod="openshift-kube-apiserver/installer-8-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-8-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:07:00.152547 master-0 kubenswrapper[28120]: I0220 15:07:00.152486 28120 status_manager.go:851] "Failed to get status for pod" podUID="84d9b64313fdfb9864d29171f85c889a" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 20 15:07:01.109033 master-0 kubenswrapper[28120]: I0220 15:07:01.108990 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ea6211626f1fb00ff366fe3d5c42f298","Type":"ContainerStarted","Data":"a8490ca62f690591b921e001622bc222a01c2687157a0a7fcea7b53a28e464bd"} Feb 20 15:07:01.109033 master-0 kubenswrapper[28120]: I0220 15:07:01.109032 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ea6211626f1fb00ff366fe3d5c42f298","Type":"ContainerStarted","Data":"9a0accf4948922f187c44f52bde55c44ea0fd760696b3ca5b473c3aa1eda6590"} Feb 20 15:07:02.143777 master-0 kubenswrapper[28120]: I0220 15:07:02.143732 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"ea6211626f1fb00ff366fe3d5c42f298","Type":"ContainerStarted","Data":"bcc329f2404d8b8b177a5405f256bb85c5b760c8ee6cd2f874e7b5f4a2c8e4bf"} Feb 20 15:07:02.143777 master-0 kubenswrapper[28120]: I0220 15:07:02.143784 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ea6211626f1fb00ff366fe3d5c42f298","Type":"ContainerStarted","Data":"252ae0fb3c6c05f118c19cc74d133c5aab173b7dbd204db4af9867f2490abcbb"} Feb 20 15:07:02.144281 master-0 kubenswrapper[28120]: I0220 15:07:02.143800 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ea6211626f1fb00ff366fe3d5c42f298","Type":"ContainerStarted","Data":"193435f9557cb3124fc1a2eed9ef7d4f0d7d67d77014c813c9a3f40178ac3af8"} Feb 20 15:07:02.144281 master-0 kubenswrapper[28120]: I0220 15:07:02.143984 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:07:02.144281 master-0 kubenswrapper[28120]: I0220 15:07:02.144152 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71055f17-3b7a-49ac-b848-3ca2598ea5cb" Feb 20 15:07:02.144281 master-0 kubenswrapper[28120]: I0220 15:07:02.144189 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71055f17-3b7a-49ac-b848-3ca2598ea5cb" Feb 20 15:07:02.384492 master-0 kubenswrapper[28120]: I0220 15:07:02.384417 28120 scope.go:117] "RemoveContainer" containerID="648933d86ebc41b4f0c29dee7c6def360e8626c8f16e72ee5fb4e3e4b02a93f1" Feb 20 15:07:03.901219 master-0 kubenswrapper[28120]: I0220 15:07:03.901134 28120 patch_prober.go:28] interesting pod/console-6bcb747b79-dfz8f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" 
start-of-body= Feb 20 15:07:03.902025 master-0 kubenswrapper[28120]: I0220 15:07:03.901242 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6bcb747b79-dfz8f" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Feb 20 15:07:04.095465 master-0 kubenswrapper[28120]: I0220 15:07:04.095414 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:07:04.095733 master-0 kubenswrapper[28120]: I0220 15:07:04.095715 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:07:04.101636 master-0 kubenswrapper[28120]: I0220 15:07:04.101606 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:07:05.123644 master-0 kubenswrapper[28120]: I0220 15:07:05.123532 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:07:05.123644 master-0 kubenswrapper[28120]: I0220 15:07:05.123623 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:07:05.123644 master-0 kubenswrapper[28120]: I0220 15:07:05.123653 28120 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:07:05.124912 master-0 kubenswrapper[28120]: I0220 15:07:05.124705 28120 scope.go:117] "RemoveContainer" containerID="39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1" Feb 20 15:07:05.126069 master-0 kubenswrapper[28120]: E0220 15:07:05.125550 28120 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(84d9b64313fdfb9864d29171f85c889a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="84d9b64313fdfb9864d29171f85c889a" Feb 20 15:07:07.164283 master-0 kubenswrapper[28120]: I0220 15:07:07.164229 28120 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:07:07.184169 master-0 kubenswrapper[28120]: I0220 15:07:07.184052 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="ea6211626f1fb00ff366fe3d5c42f298" podUID="c8df3929-a630-4d36-b99b-fefcdc6593de" Feb 20 15:07:07.198785 master-0 kubenswrapper[28120]: I0220 15:07:07.198720 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71055f17-3b7a-49ac-b848-3ca2598ea5cb" Feb 20 15:07:07.198785 master-0 kubenswrapper[28120]: I0220 15:07:07.198768 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71055f17-3b7a-49ac-b848-3ca2598ea5cb" Feb 20 15:07:07.207069 master-0 kubenswrapper[28120]: I0220 15:07:07.206989 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 20 15:07:07.213875 master-0 kubenswrapper[28120]: I0220 15:07:07.213775 28120 patch_prober.go:28] interesting pod/console-68cd6dbb78-rhjhv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 20 15:07:07.214043 master-0 kubenswrapper[28120]: I0220 15:07:07.213888 28120 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-console/console-68cd6dbb78-rhjhv" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 20 15:07:07.288069 master-0 kubenswrapper[28120]: I0220 15:07:07.288014 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="ea6211626f1fb00ff366fe3d5c42f298" podUID="c8df3929-a630-4d36-b99b-fefcdc6593de" Feb 20 15:07:08.205986 master-0 kubenswrapper[28120]: I0220 15:07:08.205910 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71055f17-3b7a-49ac-b848-3ca2598ea5cb" Feb 20 15:07:08.205986 master-0 kubenswrapper[28120]: I0220 15:07:08.205976 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="71055f17-3b7a-49ac-b848-3ca2598ea5cb" Feb 20 15:07:08.209945 master-0 kubenswrapper[28120]: I0220 15:07:08.209863 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="ea6211626f1fb00ff366fe3d5c42f298" podUID="c8df3929-a630-4d36-b99b-fefcdc6593de" Feb 20 15:07:13.895503 master-0 kubenswrapper[28120]: I0220 15:07:13.895421 28120 patch_prober.go:28] interesting pod/console-6bcb747b79-dfz8f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Feb 20 15:07:13.896529 master-0 kubenswrapper[28120]: I0220 15:07:13.895514 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6bcb747b79-dfz8f" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerName="console" probeResult="failure" output="Get 
\"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Feb 20 15:07:16.758545 master-0 kubenswrapper[28120]: I0220 15:07:16.758459 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 20 15:07:16.919795 master-0 kubenswrapper[28120]: I0220 15:07:16.919734 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 20 15:07:17.057031 master-0 kubenswrapper[28120]: I0220 15:07:17.056821 28120 scope.go:117] "RemoveContainer" containerID="39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1" Feb 20 15:07:17.213498 master-0 kubenswrapper[28120]: I0220 15:07:17.213429 28120 patch_prober.go:28] interesting pod/console-68cd6dbb78-rhjhv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Feb 20 15:07:17.213897 master-0 kubenswrapper[28120]: I0220 15:07:17.213499 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68cd6dbb78-rhjhv" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Feb 20 15:07:17.309635 master-0 kubenswrapper[28120]: I0220 15:07:17.258094 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 20 15:07:17.363966 master-0 kubenswrapper[28120]: I0220 15:07:17.363888 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 20 15:07:17.439629 master-0 kubenswrapper[28120]: I0220 15:07:17.439567 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-38nuos4fhcl5a" Feb 20 
15:07:17.577110 master-0 kubenswrapper[28120]: I0220 15:07:17.576890 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 20 15:07:17.817375 master-0 kubenswrapper[28120]: I0220 15:07:17.817302 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-jtt44" Feb 20 15:07:18.087154 master-0 kubenswrapper[28120]: I0220 15:07:18.087038 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 20 15:07:18.298263 master-0 kubenswrapper[28120]: I0220 15:07:18.298179 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 20 15:07:18.321531 master-0 kubenswrapper[28120]: I0220 15:07:18.321474 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_84d9b64313fdfb9864d29171f85c889a/kube-controller-manager/1.log" Feb 20 15:07:18.323574 master-0 kubenswrapper[28120]: I0220 15:07:18.323531 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"84d9b64313fdfb9864d29171f85c889a","Type":"ContainerStarted","Data":"c11585243e474d29c5abcfced9e9d122c303847e4a995025645e667d5a6f2999"} Feb 20 15:07:18.465201 master-0 kubenswrapper[28120]: I0220 15:07:18.464451 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 20 15:07:18.486357 master-0 kubenswrapper[28120]: I0220 15:07:18.486285 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 20 15:07:18.531404 master-0 kubenswrapper[28120]: I0220 15:07:18.531345 28120 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-8m9cn" Feb 20 15:07:18.706668 master-0 kubenswrapper[28120]: I0220 15:07:18.706583 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-d7z2t" Feb 20 15:07:18.820441 master-0 kubenswrapper[28120]: I0220 15:07:18.820333 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 20 15:07:19.214145 master-0 kubenswrapper[28120]: I0220 15:07:19.213978 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 20 15:07:19.267131 master-0 kubenswrapper[28120]: I0220 15:07:19.267038 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 20 15:07:19.307699 master-0 kubenswrapper[28120]: I0220 15:07:19.307610 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 20 15:07:19.387890 master-0 kubenswrapper[28120]: I0220 15:07:19.387815 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 20 15:07:19.484088 master-0 kubenswrapper[28120]: I0220 15:07:19.483896 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 20 15:07:19.542473 master-0 kubenswrapper[28120]: I0220 15:07:19.542371 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-btmxs" Feb 20 15:07:19.572327 master-0 kubenswrapper[28120]: I0220 15:07:19.572234 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 20 15:07:19.647889 master-0 kubenswrapper[28120]: 
I0220 15:07:19.647781 28120 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 20 15:07:19.914164 master-0 kubenswrapper[28120]: I0220 15:07:19.914113 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 20 15:07:19.991867 master-0 kubenswrapper[28120]: I0220 15:07:19.991793 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 20 15:07:20.056781 master-0 kubenswrapper[28120]: I0220 15:07:20.056719 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-cpp79"
Feb 20 15:07:20.100429 master-0 kubenswrapper[28120]: I0220 15:07:20.100319 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 20 15:07:20.104159 master-0 kubenswrapper[28120]: I0220 15:07:20.103313 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 20 15:07:20.107061 master-0 kubenswrapper[28120]: I0220 15:07:20.106998 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 20 15:07:20.201811 master-0 kubenswrapper[28120]: I0220 15:07:20.201693 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 20 15:07:20.203151 master-0 kubenswrapper[28120]: I0220 15:07:20.203105 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 20 15:07:20.210265 master-0 kubenswrapper[28120]: I0220 15:07:20.210227 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 20 15:07:20.232818 master-0 kubenswrapper[28120]: I0220 15:07:20.232780 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 20 15:07:20.262184 master-0 kubenswrapper[28120]: I0220 15:07:20.262123 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 20 15:07:20.291408 master-0 kubenswrapper[28120]: I0220 15:07:20.291353 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 20 15:07:20.300127 master-0 kubenswrapper[28120]: I0220 15:07:20.300087 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 20 15:07:20.395891 master-0 kubenswrapper[28120]: I0220 15:07:20.394578 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 20 15:07:20.463055 master-0 kubenswrapper[28120]: I0220 15:07:20.458606 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 20 15:07:20.485902 master-0 kubenswrapper[28120]: I0220 15:07:20.485824 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 20 15:07:20.518233 master-0 kubenswrapper[28120]: I0220 15:07:20.518129 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 20 15:07:20.524913 master-0 kubenswrapper[28120]: I0220 15:07:20.524836 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 20 15:07:20.578027 master-0 kubenswrapper[28120]: I0220 15:07:20.577912 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 20 15:07:20.595808 master-0 kubenswrapper[28120]: I0220 15:07:20.595713 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 20 15:07:20.655092 master-0 kubenswrapper[28120]: I0220 15:07:20.654999 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 20 15:07:20.675155 master-0 kubenswrapper[28120]: I0220 15:07:20.675078 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 20 15:07:20.784981 master-0 kubenswrapper[28120]: I0220 15:07:20.784908 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 20 15:07:20.796547 master-0 kubenswrapper[28120]: I0220 15:07:20.796486 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 20 15:07:20.838031 master-0 kubenswrapper[28120]: I0220 15:07:20.837954 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-brpdw"
Feb 20 15:07:20.917625 master-0 kubenswrapper[28120]: I0220 15:07:20.917562 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-c2dd6"
Feb 20 15:07:20.937274 master-0 kubenswrapper[28120]: I0220 15:07:20.937215 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 20 15:07:21.037660 master-0 kubenswrapper[28120]: I0220 15:07:21.037508 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 20 15:07:21.124364 master-0 kubenswrapper[28120]: I0220 15:07:21.124309 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-lr4lf"
Feb 20 15:07:21.144094 master-0 kubenswrapper[28120]: I0220 15:07:21.144051 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 20 15:07:21.184772 master-0 kubenswrapper[28120]: I0220 15:07:21.184697 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xrgsf"
Feb 20 15:07:21.187109 master-0 kubenswrapper[28120]: I0220 15:07:21.187065 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-mnmfc"
Feb 20 15:07:21.311022 master-0 kubenswrapper[28120]: I0220 15:07:21.310819 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 20 15:07:21.313515 master-0 kubenswrapper[28120]: I0220 15:07:21.313462 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 20 15:07:21.320775 master-0 kubenswrapper[28120]: I0220 15:07:21.320713 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 20 15:07:21.333422 master-0 kubenswrapper[28120]: I0220 15:07:21.333377 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 20 15:07:21.379514 master-0 kubenswrapper[28120]: I0220 15:07:21.379450 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-s2d9t"
Feb 20 15:07:21.391864 master-0 kubenswrapper[28120]: I0220 15:07:21.391812 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 20 15:07:21.404382 master-0 kubenswrapper[28120]: I0220 15:07:21.404320 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 20 15:07:21.418325 master-0 kubenswrapper[28120]: I0220 15:07:21.418263 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 20 15:07:21.513796 master-0 kubenswrapper[28120]: I0220 15:07:21.513703 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 20 15:07:21.544386 master-0 kubenswrapper[28120]: I0220 15:07:21.544320 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 20 15:07:21.570082 master-0 kubenswrapper[28120]: I0220 15:07:21.569948 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Feb 20 15:07:21.577133 master-0 kubenswrapper[28120]: I0220 15:07:21.577067 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 20 15:07:21.582743 master-0 kubenswrapper[28120]: I0220 15:07:21.582713 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 20 15:07:21.715479 master-0 kubenswrapper[28120]: I0220 15:07:21.715427 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-rnbdm"
Feb 20 15:07:21.724179 master-0 kubenswrapper[28120]: I0220 15:07:21.724119 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 20 15:07:21.734870 master-0 kubenswrapper[28120]: I0220 15:07:21.734822 28120 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 20 15:07:21.754578 master-0 kubenswrapper[28120]: I0220 15:07:21.754499 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 20 15:07:21.764853 master-0 kubenswrapper[28120]: I0220 15:07:21.764783 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-mblrr"
Feb 20 15:07:21.827345 master-0 kubenswrapper[28120]: I0220 15:07:21.827161 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 20 15:07:21.833234 master-0 kubenswrapper[28120]: I0220 15:07:21.833147 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 20 15:07:21.914186 master-0 kubenswrapper[28120]: I0220 15:07:21.914100 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 20 15:07:21.933573 master-0 kubenswrapper[28120]: I0220 15:07:21.933469 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 20 15:07:21.958846 master-0 kubenswrapper[28120]: I0220 15:07:21.958733 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 20 15:07:21.960036 master-0 kubenswrapper[28120]: I0220 15:07:21.959979 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 20 15:07:22.030309 master-0 kubenswrapper[28120]: I0220 15:07:22.030228 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 20 15:07:22.066962 master-0 kubenswrapper[28120]: I0220 15:07:22.063256 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 20 15:07:22.078162 master-0 kubenswrapper[28120]: I0220 15:07:22.075554 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 20 15:07:22.088844 master-0 kubenswrapper[28120]: I0220 15:07:22.088791 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 20 15:07:22.091022 master-0 kubenswrapper[28120]: I0220 15:07:22.090157 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 20 15:07:22.113625 master-0 kubenswrapper[28120]: I0220 15:07:22.113547 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 20 15:07:22.113854 master-0 kubenswrapper[28120]: I0220 15:07:22.113620 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 20 15:07:22.125540 master-0 kubenswrapper[28120]: I0220 15:07:22.125477 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 20 15:07:22.137812 master-0 kubenswrapper[28120]: I0220 15:07:22.137730 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 20 15:07:22.200576 master-0 kubenswrapper[28120]: I0220 15:07:22.200523 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 20 15:07:22.272467 master-0 kubenswrapper[28120]: I0220 15:07:22.272413 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 20 15:07:22.340877 master-0 kubenswrapper[28120]: I0220 15:07:22.340747 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 20 15:07:22.379537 master-0 kubenswrapper[28120]: I0220 15:07:22.379472 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 20 15:07:22.414797 master-0 kubenswrapper[28120]: I0220 15:07:22.414741 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-tkxrl"
Feb 20 15:07:22.431376 master-0 kubenswrapper[28120]: I0220 15:07:22.431318 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 20 15:07:22.446061 master-0 kubenswrapper[28120]: I0220 15:07:22.445992 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 20 15:07:22.473031 master-0 kubenswrapper[28120]: I0220 15:07:22.472958 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 20 15:07:22.511458 master-0 kubenswrapper[28120]: I0220 15:07:22.511404 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 20 15:07:22.511622 master-0 kubenswrapper[28120]: I0220 15:07:22.511517 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 20 15:07:22.593508 master-0 kubenswrapper[28120]: I0220 15:07:22.593325 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-sn97p"
Feb 20 15:07:22.614725 master-0 kubenswrapper[28120]: I0220 15:07:22.614655 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 20 15:07:22.654621 master-0 kubenswrapper[28120]: I0220 15:07:22.654536 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-9g7zv"
Feb 20 15:07:22.687644 master-0 kubenswrapper[28120]: I0220 15:07:22.687548 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-5mrbx"
Feb 20 15:07:22.712654 master-0 kubenswrapper[28120]: I0220 15:07:22.712597 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 20 15:07:22.797838 master-0 kubenswrapper[28120]: I0220 15:07:22.797730 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Feb 20 15:07:22.805900 master-0 kubenswrapper[28120]: I0220 15:07:22.805818 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-7vdpw"
Feb 20 15:07:22.805900 master-0 kubenswrapper[28120]: I0220 15:07:22.805865 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-9zvh6"
Feb 20 15:07:22.857494 master-0 kubenswrapper[28120]: I0220 15:07:22.857284 28120 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 20 15:07:22.859402 master-0 kubenswrapper[28120]: I0220 15:07:22.859329 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 20 15:07:22.868874 master-0 kubenswrapper[28120]: I0220 15:07:22.868804 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 20 15:07:22.869049 master-0 kubenswrapper[28120]: I0220 15:07:22.868909 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 20 15:07:22.875800 master-0 kubenswrapper[28120]: I0220 15:07:22.875730 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 20 15:07:22.889794 master-0 kubenswrapper[28120]: I0220 15:07:22.889432 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 20 15:07:22.909479 master-0 kubenswrapper[28120]: I0220 15:07:22.909116 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=15.909090626 podStartE2EDuration="15.909090626s" podCreationTimestamp="2026-02-20 15:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:07:22.895988948 +0000 UTC m=+381.156782541" watchObservedRunningTime="2026-02-20 15:07:22.909090626 +0000 UTC m=+381.169884229"
Feb 20 15:07:22.945638 master-0 kubenswrapper[28120]: I0220 15:07:22.945480 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 20 15:07:22.949095 master-0 kubenswrapper[28120]: I0220 15:07:22.949041 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 20 15:07:23.022479 master-0 kubenswrapper[28120]: I0220 15:07:23.022395 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 20 15:07:23.062951 master-0 kubenswrapper[28120]: I0220 15:07:23.059739 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 20 15:07:23.093172 master-0 kubenswrapper[28120]: I0220 15:07:23.093062 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 20 15:07:23.148171 master-0 kubenswrapper[28120]: I0220 15:07:23.148039 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 20 15:07:23.148171 master-0 kubenswrapper[28120]: I0220 15:07:23.148142 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 20 15:07:23.215059 master-0 kubenswrapper[28120]: I0220 15:07:23.214912 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 20 15:07:23.276640 master-0 kubenswrapper[28120]: I0220 15:07:23.276567 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 20 15:07:23.355904 master-0 kubenswrapper[28120]: I0220 15:07:23.355823 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 20 15:07:23.422118 master-0 kubenswrapper[28120]: I0220 15:07:23.421976 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 20 15:07:23.440050 master-0 kubenswrapper[28120]: I0220 15:07:23.439989 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 20 15:07:23.513398 master-0 kubenswrapper[28120]: I0220 15:07:23.513326 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 20 15:07:23.525016 master-0 kubenswrapper[28120]: I0220 15:07:23.524909 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 20 15:07:23.534610 master-0 kubenswrapper[28120]: I0220 15:07:23.534544 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 20 15:07:23.593960 master-0 kubenswrapper[28120]: I0220 15:07:23.593871 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 20 15:07:23.655158 master-0 kubenswrapper[28120]: I0220 15:07:23.655046 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 20 15:07:23.723162 master-0 kubenswrapper[28120]: I0220 15:07:23.722990 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 20 15:07:23.726978 master-0 kubenswrapper[28120]: I0220 15:07:23.726889 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-gfr9m"
Feb 20 15:07:23.728754 master-0 kubenswrapper[28120]: I0220 15:07:23.728689 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 20 15:07:23.740427 master-0 kubenswrapper[28120]: I0220 15:07:23.740372 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-2w8rc"
Feb 20 15:07:23.748032 master-0 kubenswrapper[28120]: I0220 15:07:23.747970 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 20 15:07:23.767698 master-0 kubenswrapper[28120]: I0220 15:07:23.767603 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 20 15:07:23.792896 master-0 kubenswrapper[28120]: I0220 15:07:23.792825 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-dm2ds"
Feb 20 15:07:23.821772 master-0 kubenswrapper[28120]: I0220 15:07:23.821680 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 20 15:07:23.905857 master-0 kubenswrapper[28120]: I0220 15:07:23.905782 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:07:23.912853 master-0 kubenswrapper[28120]: I0220 15:07:23.912789 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:07:23.927446 master-0 kubenswrapper[28120]: I0220 15:07:23.927387 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4ru6f463aj8a5"
Feb 20 15:07:23.935486 master-0 kubenswrapper[28120]: I0220 15:07:23.935413 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 20 15:07:23.947146 master-0 kubenswrapper[28120]: I0220 15:07:23.947056 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 20 15:07:24.003408 master-0 kubenswrapper[28120]: I0220 15:07:24.003320 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 20 15:07:24.049315 master-0 kubenswrapper[28120]: I0220 15:07:24.049243 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 20 15:07:24.052157 master-0 kubenswrapper[28120]: I0220 15:07:24.052097 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 20 15:07:24.071582 master-0 kubenswrapper[28120]: I0220 15:07:24.071466 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 20 15:07:24.170832 master-0 kubenswrapper[28120]: I0220 15:07:24.170750 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 20 15:07:24.198891 master-0 kubenswrapper[28120]: I0220 15:07:24.198848 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 20 15:07:24.199265 master-0 kubenswrapper[28120]: I0220 15:07:24.199223 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 20 15:07:24.234554 master-0 kubenswrapper[28120]: I0220 15:07:24.234496 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 20 15:07:24.250529 master-0 kubenswrapper[28120]: I0220 15:07:24.250472 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 20 15:07:24.259577 master-0 kubenswrapper[28120]: I0220 15:07:24.259472 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 20 15:07:24.385713 master-0 kubenswrapper[28120]: I0220 15:07:24.385609 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 20 15:07:24.446803 master-0 kubenswrapper[28120]: I0220 15:07:24.446701 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 20 15:07:24.478690 master-0 kubenswrapper[28120]: I0220 15:07:24.478624 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 20 15:07:24.484356 master-0 kubenswrapper[28120]: I0220 15:07:24.484311 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Feb 20 15:07:24.578432 master-0 kubenswrapper[28120]: I0220 15:07:24.578216 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 20 15:07:24.579519 master-0 kubenswrapper[28120]: I0220 15:07:24.579438 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-6nj4c"
Feb 20 15:07:24.656632 master-0 kubenswrapper[28120]: I0220 15:07:24.656542 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 20 15:07:24.789802 master-0 kubenswrapper[28120]: I0220 15:07:24.789733 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 20 15:07:24.841284 master-0 kubenswrapper[28120]: I0220 15:07:24.841127 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 20 15:07:24.907999 master-0 kubenswrapper[28120]: I0220 15:07:24.907871 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 20 15:07:24.908661 master-0 kubenswrapper[28120]: I0220 15:07:24.908623 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 20 15:07:24.998390 master-0 kubenswrapper[28120]: I0220 15:07:24.998307 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Feb 20 15:07:24.999216 master-0 kubenswrapper[28120]: I0220 15:07:24.998594 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 20 15:07:25.009650 master-0 kubenswrapper[28120]: I0220 15:07:25.009599 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 20 15:07:25.019574 master-0 kubenswrapper[28120]: I0220 15:07:25.019526 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 20 15:07:25.037091 master-0 kubenswrapper[28120]: I0220 15:07:25.036975 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 20 15:07:25.051258 master-0 kubenswrapper[28120]: I0220 15:07:25.051203 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 20 15:07:25.123579 master-0 kubenswrapper[28120]: I0220 15:07:25.123373 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:07:25.123579 master-0 kubenswrapper[28120]: I0220 15:07:25.123493 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:07:25.132037 master-0 kubenswrapper[28120]: I0220 15:07:25.131914 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 20 15:07:25.196418 master-0 kubenswrapper[28120]: I0220 15:07:25.196301 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 20 15:07:25.217087 master-0 kubenswrapper[28120]: I0220 15:07:25.217027 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 20 15:07:25.332047 master-0 kubenswrapper[28120]: I0220 15:07:25.331888 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 20 15:07:25.452401 master-0 kubenswrapper[28120]: I0220 15:07:25.452206 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 20 15:07:25.463343 master-0 kubenswrapper[28120]: I0220 15:07:25.463282 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 20 15:07:25.482010 master-0 kubenswrapper[28120]: I0220 15:07:25.481913 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7pkl9jqft06ca"
Feb 20 15:07:25.494162 master-0 kubenswrapper[28120]: I0220 15:07:25.494092 28120 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 20 15:07:25.567553 master-0 kubenswrapper[28120]: I0220 15:07:25.567468 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 20 15:07:25.570170 master-0 kubenswrapper[28120]: I0220 15:07:25.570036 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 20 15:07:25.625809 master-0 kubenswrapper[28120]: I0220 15:07:25.625733 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 20 15:07:25.676309 master-0 kubenswrapper[28120]: I0220 15:07:25.676183 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 20 15:07:25.690298 master-0 kubenswrapper[28120]: I0220 15:07:25.690209 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 20 15:07:25.763140 master-0 kubenswrapper[28120]: I0220 15:07:25.762973 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4x7c8"
Feb 20 15:07:25.852270 master-0 kubenswrapper[28120]: I0220 15:07:25.852172 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 20 15:07:25.884572 master-0 kubenswrapper[28120]: I0220 15:07:25.884467 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 20 15:07:25.906081 master-0 kubenswrapper[28120]: I0220 15:07:25.906003 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 20 15:07:25.958208 master-0 kubenswrapper[28120]: I0220 15:07:25.958123 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 20 15:07:25.965683 master-0 kubenswrapper[28120]: I0220 15:07:25.965604 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 20 15:07:25.966298 master-0 kubenswrapper[28120]: I0220 15:07:25.966234 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Feb 20 15:07:26.050159 master-0 kubenswrapper[28120]: I0220 15:07:26.049995 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 20 15:07:26.133824 master-0 kubenswrapper[28120]: I0220 15:07:26.133730 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 20 15:07:26.162678 master-0 kubenswrapper[28120]: I0220 15:07:26.162565 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 20 15:07:26.204760 master-0 kubenswrapper[28120]: I0220 15:07:26.204683 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 20 15:07:26.261632 master-0 kubenswrapper[28120]: I0220 15:07:26.261485 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Feb 20 15:07:26.349426 master-0 kubenswrapper[28120]: I0220 15:07:26.349240 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 20 15:07:26.374384 master-0 kubenswrapper[28120]: I0220 15:07:26.374345 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 20 15:07:26.398420 master-0 kubenswrapper[28120]: I0220 15:07:26.398364 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 20 15:07:26.616180 master-0 kubenswrapper[28120]: I0220 15:07:26.616042 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 20 15:07:26.627265 master-0 kubenswrapper[28120]: I0220 15:07:26.627217 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 20 15:07:26.656718 master-0 kubenswrapper[28120]: I0220 15:07:26.656634 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 20 15:07:26.662432 master-0 kubenswrapper[28120]: I0220 15:07:26.662356 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 20 15:07:26.675710 master-0 kubenswrapper[28120]: I0220 15:07:26.675669 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 20 15:07:26.767075 master-0 kubenswrapper[28120]: I0220 15:07:26.766994 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 20 15:07:26.900841 master-0 kubenswrapper[28120]: I0220 15:07:26.900691 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 20 15:07:26.905546 master-0 kubenswrapper[28120]: I0220 15:07:26.905500 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 20 15:07:26.950150 master-0 kubenswrapper[28120]: I0220 15:07:26.950061 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 20 15:07:26.984840 master-0 kubenswrapper[28120]: I0220 15:07:26.984780 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 20 15:07:26.995112 master-0 kubenswrapper[28120]: I0220 15:07:26.995060 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 20 15:07:27.031769 master-0 kubenswrapper[28120]: I0220 15:07:27.031711 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 20 15:07:27.079754 master-0 kubenswrapper[28120]: I0220 15:07:27.079677 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 20 15:07:27.099178 master-0 kubenswrapper[28120]: I0220 15:07:27.099095 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mvrxq"
Feb 20 15:07:27.104584 master-0 kubenswrapper[28120]: I0220 15:07:27.104491 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 20 15:07:27.128236 master-0 kubenswrapper[28120]: I0220 15:07:27.128177 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 20 15:07:27.172305 master-0 kubenswrapper[28120]: I0220 15:07:27.172161 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 20 15:07:27.174386 master-0 kubenswrapper[28120]: I0220 15:07:27.174341 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 20 15:07:27.175990 master-0
kubenswrapper[28120]: I0220 15:07:27.175896 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 20 15:07:27.194112 master-0 kubenswrapper[28120]: I0220 15:07:27.194042 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 20 15:07:27.214393 master-0 kubenswrapper[28120]: I0220 15:07:27.214326 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 20 15:07:27.217315 master-0 kubenswrapper[28120]: I0220 15:07:27.217241 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:07:27.223056 master-0 kubenswrapper[28120]: I0220 15:07:27.222163 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-68cd6dbb78-rhjhv" Feb 20 15:07:27.312069 master-0 kubenswrapper[28120]: I0220 15:07:27.311971 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 20 15:07:27.355389 master-0 kubenswrapper[28120]: I0220 15:07:27.355317 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 20 15:07:27.418826 master-0 kubenswrapper[28120]: I0220 15:07:27.418770 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 20 15:07:27.448529 master-0 kubenswrapper[28120]: I0220 15:07:27.448392 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 20 15:07:27.510973 master-0 kubenswrapper[28120]: I0220 15:07:27.510862 28120 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 20 15:07:27.639791 master-0 kubenswrapper[28120]: I0220 15:07:27.639730 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 20 15:07:27.670431 master-0 kubenswrapper[28120]: I0220 15:07:27.670371 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 20 15:07:27.707979 master-0 kubenswrapper[28120]: I0220 15:07:27.707799 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 20 15:07:27.716658 master-0 kubenswrapper[28120]: I0220 15:07:27.716604 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 20 15:07:27.743164 master-0 kubenswrapper[28120]: I0220 15:07:27.743110 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 20 15:07:27.784333 master-0 kubenswrapper[28120]: I0220 15:07:27.784270 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-tljfd" Feb 20 15:07:27.799947 master-0 kubenswrapper[28120]: I0220 15:07:27.799864 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 20 15:07:27.857095 master-0 kubenswrapper[28120]: I0220 15:07:27.857008 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 20 15:07:27.874205 master-0 kubenswrapper[28120]: I0220 15:07:27.874108 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 20 15:07:27.881189 master-0 kubenswrapper[28120]: I0220 15:07:27.881130 28120 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Feb 20 15:07:27.894507 master-0 kubenswrapper[28120]: I0220 15:07:27.894415 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 20 15:07:27.911854 master-0 kubenswrapper[28120]: I0220 15:07:27.911786 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 20 15:07:27.958140 master-0 kubenswrapper[28120]: I0220 15:07:27.957991 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 20 15:07:28.005040 master-0 kubenswrapper[28120]: I0220 15:07:28.004919 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 20 15:07:28.107455 master-0 kubenswrapper[28120]: I0220 15:07:28.107376 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 20 15:07:28.189104 master-0 kubenswrapper[28120]: I0220 15:07:28.189022 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 20 15:07:28.204349 master-0 kubenswrapper[28120]: I0220 15:07:28.204268 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-4xtlh" Feb 20 15:07:28.206379 master-0 kubenswrapper[28120]: I0220 15:07:28.206313 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 20 15:07:28.236231 master-0 kubenswrapper[28120]: I0220 15:07:28.236072 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 20 15:07:28.243120 master-0 kubenswrapper[28120]: I0220 15:07:28.243068 28120 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-console"/"networking-console-plugin" Feb 20 15:07:28.291968 master-0 kubenswrapper[28120]: I0220 15:07:28.291188 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 20 15:07:28.331956 master-0 kubenswrapper[28120]: I0220 15:07:28.331471 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 20 15:07:28.337452 master-0 kubenswrapper[28120]: I0220 15:07:28.337406 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 20 15:07:28.390656 master-0 kubenswrapper[28120]: I0220 15:07:28.390580 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-n8qfb" Feb 20 15:07:28.414851 master-0 kubenswrapper[28120]: I0220 15:07:28.414757 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 20 15:07:28.430071 master-0 kubenswrapper[28120]: I0220 15:07:28.429990 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 20 15:07:28.446424 master-0 kubenswrapper[28120]: I0220 15:07:28.446363 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 20 15:07:28.447214 master-0 kubenswrapper[28120]: I0220 15:07:28.447160 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-k6xtb" Feb 20 15:07:28.450049 master-0 kubenswrapper[28120]: I0220 15:07:28.450005 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 20 15:07:28.512979 master-0 kubenswrapper[28120]: I0220 15:07:28.512864 28120 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-insights"/"openshift-service-ca.crt" Feb 20 15:07:28.537659 master-0 kubenswrapper[28120]: I0220 15:07:28.537591 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 20 15:07:28.541999 master-0 kubenswrapper[28120]: I0220 15:07:28.541858 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 20 15:07:28.573547 master-0 kubenswrapper[28120]: I0220 15:07:28.573463 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-cp6wb" Feb 20 15:07:28.599198 master-0 kubenswrapper[28120]: I0220 15:07:28.599097 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 20 15:07:28.678994 master-0 kubenswrapper[28120]: I0220 15:07:28.677989 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 20 15:07:28.685590 master-0 kubenswrapper[28120]: I0220 15:07:28.685543 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 20 15:07:28.816058 master-0 kubenswrapper[28120]: I0220 15:07:28.815826 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 20 15:07:28.839106 master-0 kubenswrapper[28120]: I0220 15:07:28.839041 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-mxnq7" Feb 20 15:07:28.863661 master-0 kubenswrapper[28120]: I0220 15:07:28.863602 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 20 15:07:28.873876 master-0 kubenswrapper[28120]: I0220 
15:07:28.873826 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 20 15:07:28.896588 master-0 kubenswrapper[28120]: I0220 15:07:28.896520 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 20 15:07:28.913266 master-0 kubenswrapper[28120]: I0220 15:07:28.913170 28120 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 20 15:07:28.929504 master-0 kubenswrapper[28120]: I0220 15:07:28.929449 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7zgzx" Feb 20 15:07:28.946462 master-0 kubenswrapper[28120]: I0220 15:07:28.946413 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 20 15:07:28.969345 master-0 kubenswrapper[28120]: I0220 15:07:28.969278 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 20 15:07:29.019389 master-0 kubenswrapper[28120]: I0220 15:07:29.019298 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-2wv5x" Feb 20 15:07:29.045865 master-0 kubenswrapper[28120]: I0220 15:07:29.045773 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 20 15:07:29.103472 master-0 kubenswrapper[28120]: I0220 15:07:29.103305 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 20 15:07:29.117104 master-0 kubenswrapper[28120]: I0220 15:07:29.117007 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 20 15:07:29.133910 master-0 kubenswrapper[28120]: I0220 15:07:29.133843 28120 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 20 15:07:29.133910 master-0 kubenswrapper[28120]: I0220 15:07:29.133843 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 20 15:07:29.184255 master-0 kubenswrapper[28120]: I0220 15:07:29.184185 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 20 15:07:29.195353 master-0 kubenswrapper[28120]: I0220 15:07:29.195298 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 20 15:07:29.196976 master-0 kubenswrapper[28120]: I0220 15:07:29.196858 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 20 15:07:29.232166 master-0 kubenswrapper[28120]: I0220 15:07:29.232072 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 20 15:07:29.343812 master-0 kubenswrapper[28120]: I0220 15:07:29.343726 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-77fdh" Feb 20 15:07:29.434874 master-0 kubenswrapper[28120]: I0220 15:07:29.434704 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 20 15:07:29.473199 master-0 kubenswrapper[28120]: I0220 15:07:29.473101 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 20 15:07:29.476158 master-0 kubenswrapper[28120]: I0220 15:07:29.476091 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-ts5zc" Feb 20 15:07:29.491542 master-0 kubenswrapper[28120]: I0220 
15:07:29.491500 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 20 15:07:29.505526 master-0 kubenswrapper[28120]: I0220 15:07:29.505429 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 20 15:07:29.508135 master-0 kubenswrapper[28120]: I0220 15:07:29.508075 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 20 15:07:29.508304 master-0 kubenswrapper[28120]: I0220 15:07:29.508243 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 20 15:07:29.524271 master-0 kubenswrapper[28120]: I0220 15:07:29.524193 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 20 15:07:29.616239 master-0 kubenswrapper[28120]: I0220 15:07:29.616155 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 20 15:07:29.715162 master-0 kubenswrapper[28120]: I0220 15:07:29.714978 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 20 15:07:29.757523 master-0 kubenswrapper[28120]: I0220 15:07:29.757442 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-7m26s" Feb 20 15:07:29.821609 master-0 kubenswrapper[28120]: I0220 15:07:29.821536 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 20 15:07:29.822943 master-0 kubenswrapper[28120]: I0220 15:07:29.822869 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 20 15:07:29.915779 master-0 kubenswrapper[28120]: I0220 
15:07:29.915670 28120 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 20 15:07:29.916199 master-0 kubenswrapper[28120]: I0220 15:07:29.916069 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="b562c0e662e60a50528cc033a91352f2" containerName="startup-monitor" containerID="cri-o://b565aadbad872ccb9f55e50e2a8984e6d0b605d8356a58bc1a9a25819417ad92" gracePeriod=5 Feb 20 15:07:29.965969 master-0 kubenswrapper[28120]: I0220 15:07:29.965785 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 20 15:07:29.969194 master-0 kubenswrapper[28120]: I0220 15:07:29.969127 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 20 15:07:29.986446 master-0 kubenswrapper[28120]: I0220 15:07:29.986357 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-jscmz" Feb 20 15:07:30.002954 master-0 kubenswrapper[28120]: I0220 15:07:30.002854 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 20 15:07:30.029304 master-0 kubenswrapper[28120]: I0220 15:07:30.029235 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 20 15:07:30.098133 master-0 kubenswrapper[28120]: I0220 15:07:30.098037 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 20 15:07:30.105641 master-0 kubenswrapper[28120]: I0220 15:07:30.105581 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 20 15:07:30.184137 master-0 
kubenswrapper[28120]: I0220 15:07:30.184079 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 20 15:07:30.185307 master-0 kubenswrapper[28120]: I0220 15:07:30.185270 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 20 15:07:30.223781 master-0 kubenswrapper[28120]: I0220 15:07:30.223604 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 20 15:07:30.246770 master-0 kubenswrapper[28120]: I0220 15:07:30.246700 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 20 15:07:30.314275 master-0 kubenswrapper[28120]: I0220 15:07:30.314162 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 20 15:07:30.375010 master-0 kubenswrapper[28120]: I0220 15:07:30.374878 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 20 15:07:30.397846 master-0 kubenswrapper[28120]: I0220 15:07:30.397762 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 20 15:07:30.467330 master-0 kubenswrapper[28120]: I0220 15:07:30.467239 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 20 15:07:30.490790 master-0 kubenswrapper[28120]: I0220 15:07:30.490706 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 20 15:07:30.561325 master-0 kubenswrapper[28120]: I0220 15:07:30.561263 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 20 15:07:30.596903 master-0 
kubenswrapper[28120]: I0220 15:07:30.596821 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 20 15:07:30.727292 master-0 kubenswrapper[28120]: I0220 15:07:30.726331 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 20 15:07:30.823949 master-0 kubenswrapper[28120]: I0220 15:07:30.823763 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 20 15:07:30.849661 master-0 kubenswrapper[28120]: I0220 15:07:30.849600 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 20 15:07:30.864801 master-0 kubenswrapper[28120]: I0220 15:07:30.864740 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 20 15:07:30.907301 master-0 kubenswrapper[28120]: I0220 15:07:30.907218 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-qb2q7" Feb 20 15:07:30.922800 master-0 kubenswrapper[28120]: I0220 15:07:30.922739 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 20 15:07:30.980224 master-0 kubenswrapper[28120]: I0220 15:07:30.980167 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 20 15:07:30.999230 master-0 kubenswrapper[28120]: I0220 15:07:30.999161 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 20 15:07:31.076787 master-0 kubenswrapper[28120]: I0220 15:07:31.074255 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 20 15:07:31.529391 master-0 kubenswrapper[28120]: I0220 
15:07:31.529270 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-c8rnz" Feb 20 15:07:31.581436 master-0 kubenswrapper[28120]: I0220 15:07:31.581361 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 20 15:07:31.581899 master-0 kubenswrapper[28120]: I0220 15:07:31.581853 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 20 15:07:31.747558 master-0 kubenswrapper[28120]: I0220 15:07:31.747490 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-d2ca3efq1akg1" Feb 20 15:07:31.830308 master-0 kubenswrapper[28120]: I0220 15:07:31.830104 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 20 15:07:32.186860 master-0 kubenswrapper[28120]: I0220 15:07:32.186737 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 20 15:07:32.280358 master-0 kubenswrapper[28120]: I0220 15:07:32.280295 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 20 15:07:32.616908 master-0 kubenswrapper[28120]: I0220 15:07:32.616791 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 20 15:07:32.785540 master-0 kubenswrapper[28120]: I0220 15:07:32.785461 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 20 15:07:32.828292 master-0 kubenswrapper[28120]: I0220 15:07:32.828238 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 20 15:07:33.213690 
master-0 kubenswrapper[28120]: I0220 15:07:33.213621 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 20 15:07:33.397920 master-0 kubenswrapper[28120]: I0220 15:07:33.397859 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 20 15:07:34.190055 master-0 kubenswrapper[28120]: I0220 15:07:34.189741 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6bcb747b79-dfz8f"] Feb 20 15:07:35.127833 master-0 kubenswrapper[28120]: I0220 15:07:35.127744 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:07:35.477256 master-0 kubenswrapper[28120]: I0220 15:07:35.477195 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_b562c0e662e60a50528cc033a91352f2/startup-monitor/0.log" Feb 20 15:07:35.477256 master-0 kubenswrapper[28120]: I0220 15:07:35.477251 28120 generic.go:334] "Generic (PLEG): container finished" podID="b562c0e662e60a50528cc033a91352f2" containerID="b565aadbad872ccb9f55e50e2a8984e6d0b605d8356a58bc1a9a25819417ad92" exitCode=137 Feb 20 15:07:35.478090 master-0 kubenswrapper[28120]: I0220 15:07:35.477293 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="847b9f0dfeff46bab2ff144c6cb83b7546e030336dfe4cae9bbabb7c4275f02a" Feb 20 15:07:35.524776 master-0 kubenswrapper[28120]: I0220 15:07:35.524666 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_b562c0e662e60a50528cc033a91352f2/startup-monitor/0.log" Feb 20 15:07:35.524979 master-0 kubenswrapper[28120]: I0220 15:07:35.524817 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 20 15:07:35.648907 master-0 kubenswrapper[28120]: I0220 15:07:35.648814 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-pod-resource-dir\") pod \"b562c0e662e60a50528cc033a91352f2\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " Feb 20 15:07:35.648907 master-0 kubenswrapper[28120]: I0220 15:07:35.648896 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-var-lock\") pod \"b562c0e662e60a50528cc033a91352f2\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " Feb 20 15:07:35.649360 master-0 kubenswrapper[28120]: I0220 15:07:35.649101 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-manifests\") pod \"b562c0e662e60a50528cc033a91352f2\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " Feb 20 15:07:35.649360 master-0 kubenswrapper[28120]: I0220 15:07:35.649133 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-resource-dir\") pod \"b562c0e662e60a50528cc033a91352f2\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " Feb 20 15:07:35.649360 master-0 kubenswrapper[28120]: I0220 15:07:35.649196 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-var-log\") pod \"b562c0e662e60a50528cc033a91352f2\" (UID: \"b562c0e662e60a50528cc033a91352f2\") " Feb 20 15:07:35.649642 master-0 kubenswrapper[28120]: I0220 15:07:35.649384 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-manifests" (OuterVolumeSpecName: "manifests") pod "b562c0e662e60a50528cc033a91352f2" (UID: "b562c0e662e60a50528cc033a91352f2"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:07:35.649642 master-0 kubenswrapper[28120]: I0220 15:07:35.649514 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-var-log" (OuterVolumeSpecName: "var-log") pod "b562c0e662e60a50528cc033a91352f2" (UID: "b562c0e662e60a50528cc033a91352f2"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:07:35.649642 master-0 kubenswrapper[28120]: I0220 15:07:35.649553 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b562c0e662e60a50528cc033a91352f2" (UID: "b562c0e662e60a50528cc033a91352f2"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:07:35.649642 master-0 kubenswrapper[28120]: I0220 15:07:35.649596 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-var-lock" (OuterVolumeSpecName: "var-lock") pod "b562c0e662e60a50528cc033a91352f2" (UID: "b562c0e662e60a50528cc033a91352f2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:07:35.650092 master-0 kubenswrapper[28120]: I0220 15:07:35.649757 28120 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-manifests\") on node \"master-0\" DevicePath \"\""
Feb 20 15:07:35.650092 master-0 kubenswrapper[28120]: I0220 15:07:35.649779 28120 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:07:35.650092 master-0 kubenswrapper[28120]: I0220 15:07:35.649796 28120 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-var-log\") on node \"master-0\" DevicePath \"\""
Feb 20 15:07:35.650092 master-0 kubenswrapper[28120]: I0220 15:07:35.649812 28120 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 20 15:07:35.670997 master-0 kubenswrapper[28120]: I0220 15:07:35.668746 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "b562c0e662e60a50528cc033a91352f2" (UID: "b562c0e662e60a50528cc033a91352f2"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:07:35.751121 master-0 kubenswrapper[28120]: I0220 15:07:35.751059 28120 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b562c0e662e60a50528cc033a91352f2-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:07:36.071388 master-0 kubenswrapper[28120]: I0220 15:07:36.071291 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b562c0e662e60a50528cc033a91352f2" path="/var/lib/kubelet/pods/b562c0e662e60a50528cc033a91352f2/volumes"
Feb 20 15:07:36.484611 master-0 kubenswrapper[28120]: I0220 15:07:36.484433 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 20 15:07:50.535457 master-0 kubenswrapper[28120]: I0220 15:07:50.535291 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 20 15:07:53.081377 master-0 kubenswrapper[28120]: I0220 15:07:53.081097 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 20 15:07:54.810838 master-0 kubenswrapper[28120]: I0220 15:07:54.810374 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 20 15:07:55.642696 master-0 kubenswrapper[28120]: I0220 15:07:55.642630 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-544f96cb59-lzxgw"]
Feb 20 15:07:55.642988 master-0 kubenswrapper[28120]: E0220 15:07:55.642895 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b562c0e662e60a50528cc033a91352f2" containerName="startup-monitor"
Feb 20 15:07:55.642988 master-0 kubenswrapper[28120]: I0220 15:07:55.642907 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b562c0e662e60a50528cc033a91352f2" containerName="startup-monitor"
Feb 20 15:07:55.642988 master-0 kubenswrapper[28120]: E0220 15:07:55.642967 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" containerName="installer"
Feb 20 15:07:55.642988 master-0 kubenswrapper[28120]: I0220 15:07:55.642974 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" containerName="installer"
Feb 20 15:07:55.643163 master-0 kubenswrapper[28120]: I0220 15:07:55.643097 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="78eaf3b1-0c29-48ea-bf93-e3a6b9a79aa6" containerName="installer"
Feb 20 15:07:55.643163 master-0 kubenswrapper[28120]: I0220 15:07:55.643119 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="b562c0e662e60a50528cc033a91352f2" containerName="startup-monitor"
Feb 20 15:07:55.643584 master-0 kubenswrapper[28120]: I0220 15:07:55.643560 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.661410 master-0 kubenswrapper[28120]: I0220 15:07:55.661340 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-544f96cb59-lzxgw"]
Feb 20 15:07:55.758853 master-0 kubenswrapper[28120]: I0220 15:07:55.758781 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-trusted-ca-bundle\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.758853 master-0 kubenswrapper[28120]: I0220 15:07:55.758848 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-config\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.759197 master-0 kubenswrapper[28120]: I0220 15:07:55.758969 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-serving-cert\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.759197 master-0 kubenswrapper[28120]: I0220 15:07:55.759101 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-oauth-config\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.759197 master-0 kubenswrapper[28120]: I0220 15:07:55.759157 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-service-ca\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.759333 master-0 kubenswrapper[28120]: I0220 15:07:55.759238 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qj5v\" (UniqueName: \"kubernetes.io/projected/5c2b6b66-eba8-4316-bb3f-beed2b53f173-kube-api-access-4qj5v\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.759381 master-0 kubenswrapper[28120]: I0220 15:07:55.759362 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-oauth-serving-cert\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.861332 master-0 kubenswrapper[28120]: I0220 15:07:55.861251 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-serving-cert\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.861904 master-0 kubenswrapper[28120]: I0220 15:07:55.861484 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-oauth-config\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.861904 master-0 kubenswrapper[28120]: I0220 15:07:55.861539 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-service-ca\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.861904 master-0 kubenswrapper[28120]: I0220 15:07:55.861582 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qj5v\" (UniqueName: \"kubernetes.io/projected/5c2b6b66-eba8-4316-bb3f-beed2b53f173-kube-api-access-4qj5v\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.861904 master-0 kubenswrapper[28120]: I0220 15:07:55.861872 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-oauth-serving-cert\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.862135 master-0 kubenswrapper[28120]: I0220 15:07:55.861979 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-trusted-ca-bundle\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.862135 master-0 kubenswrapper[28120]: I0220 15:07:55.862059 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-config\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.863319 master-0 kubenswrapper[28120]: I0220 15:07:55.863289 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-config\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.863574 master-0 kubenswrapper[28120]: I0220 15:07:55.863515 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-service-ca\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.863647 master-0 kubenswrapper[28120]: I0220 15:07:55.863535 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-oauth-serving-cert\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.864230 master-0 kubenswrapper[28120]: I0220 15:07:55.864188 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-trusted-ca-bundle\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.866048 master-0 kubenswrapper[28120]: I0220 15:07:55.866014 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-oauth-config\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.867472 master-0 kubenswrapper[28120]: I0220 15:07:55.867413 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-serving-cert\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:55.884379 master-0 kubenswrapper[28120]: I0220 15:07:55.884292 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qj5v\" (UniqueName: \"kubernetes.io/projected/5c2b6b66-eba8-4316-bb3f-beed2b53f173-kube-api-access-4qj5v\") pod \"console-544f96cb59-lzxgw\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:56.017746 master-0 kubenswrapper[28120]: I0220 15:07:56.017664 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:07:56.467107 master-0 kubenswrapper[28120]: I0220 15:07:56.467023 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-544f96cb59-lzxgw"]
Feb 20 15:07:56.479969 master-0 kubenswrapper[28120]: W0220 15:07:56.479895 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c2b6b66_eba8_4316_bb3f_beed2b53f173.slice/crio-d6d9c20e2d07c266849de452b2e503c1aa0433e6fb5dbd805fefce1a0fadada3 WatchSource:0}: Error finding container d6d9c20e2d07c266849de452b2e503c1aa0433e6fb5dbd805fefce1a0fadada3: Status 404 returned error can't find the container with id d6d9c20e2d07c266849de452b2e503c1aa0433e6fb5dbd805fefce1a0fadada3
Feb 20 15:07:56.677790 master-0 kubenswrapper[28120]: I0220 15:07:56.677670 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544f96cb59-lzxgw" event={"ID":"5c2b6b66-eba8-4316-bb3f-beed2b53f173","Type":"ContainerStarted","Data":"630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990"}
Feb 20 15:07:56.677790 master-0 kubenswrapper[28120]: I0220 15:07:56.677785 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544f96cb59-lzxgw" event={"ID":"5c2b6b66-eba8-4316-bb3f-beed2b53f173","Type":"ContainerStarted","Data":"d6d9c20e2d07c266849de452b2e503c1aa0433e6fb5dbd805fefce1a0fadada3"}
Feb 20 15:07:56.714681 master-0 kubenswrapper[28120]: I0220 15:07:56.714494 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-544f96cb59-lzxgw" podStartSLOduration=1.714462913 podStartE2EDuration="1.714462913s" podCreationTimestamp="2026-02-20 15:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:07:56.706736099 +0000 UTC m=+414.967529682" watchObservedRunningTime="2026-02-20 15:07:56.714462913 +0000 UTC m=+414.975256516"
Feb 20 15:07:59.242917 master-0 kubenswrapper[28120]: I0220 15:07:59.242801 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6bcb747b79-dfz8f" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerName="console" containerID="cri-o://43fec6224584ee0092a54e87a413a46e7446e54f8e1d6d0f153460368c1604d6" gracePeriod=15
Feb 20 15:07:59.704104 master-0 kubenswrapper[28120]: I0220 15:07:59.704027 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bcb747b79-dfz8f_2ff24014-84b3-43df-a20a-7caa44088b0c/console/0.log"
Feb 20 15:07:59.704104 master-0 kubenswrapper[28120]: I0220 15:07:59.704097 28120 generic.go:334] "Generic (PLEG): container finished" podID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerID="43fec6224584ee0092a54e87a413a46e7446e54f8e1d6d0f153460368c1604d6" exitCode=2
Feb 20 15:07:59.704427 master-0 kubenswrapper[28120]: I0220 15:07:59.704140 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bcb747b79-dfz8f" event={"ID":"2ff24014-84b3-43df-a20a-7caa44088b0c","Type":"ContainerDied","Data":"43fec6224584ee0092a54e87a413a46e7446e54f8e1d6d0f153460368c1604d6"}
Feb 20 15:07:59.875818 master-0 kubenswrapper[28120]: I0220 15:07:59.875498 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bcb747b79-dfz8f_2ff24014-84b3-43df-a20a-7caa44088b0c/console/0.log"
Feb 20 15:07:59.875818 master-0 kubenswrapper[28120]: I0220 15:07:59.875694 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:07:59.936962 master-0 kubenswrapper[28120]: I0220 15:07:59.933736 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2ff24014-84b3-43df-a20a-7caa44088b0c-console-oauth-config\") pod \"2ff24014-84b3-43df-a20a-7caa44088b0c\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") "
Feb 20 15:07:59.936962 master-0 kubenswrapper[28120]: I0220 15:07:59.933992 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxtgp\" (UniqueName: \"kubernetes.io/projected/2ff24014-84b3-43df-a20a-7caa44088b0c-kube-api-access-cxtgp\") pod \"2ff24014-84b3-43df-a20a-7caa44088b0c\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") "
Feb 20 15:07:59.936962 master-0 kubenswrapper[28120]: I0220 15:07:59.934106 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-oauth-serving-cert\") pod \"2ff24014-84b3-43df-a20a-7caa44088b0c\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") "
Feb 20 15:07:59.936962 master-0 kubenswrapper[28120]: I0220 15:07:59.934189 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-console-config\") pod \"2ff24014-84b3-43df-a20a-7caa44088b0c\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") "
Feb 20 15:07:59.936962 master-0 kubenswrapper[28120]: I0220 15:07:59.934367 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff24014-84b3-43df-a20a-7caa44088b0c-console-serving-cert\") pod \"2ff24014-84b3-43df-a20a-7caa44088b0c\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") "
Feb 20 15:07:59.936962 master-0 kubenswrapper[28120]: I0220 15:07:59.934403 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-service-ca\") pod \"2ff24014-84b3-43df-a20a-7caa44088b0c\" (UID: \"2ff24014-84b3-43df-a20a-7caa44088b0c\") "
Feb 20 15:07:59.936962 master-0 kubenswrapper[28120]: I0220 15:07:59.934831 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2ff24014-84b3-43df-a20a-7caa44088b0c" (UID: "2ff24014-84b3-43df-a20a-7caa44088b0c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:07:59.936962 master-0 kubenswrapper[28120]: I0220 15:07:59.934990 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-console-config" (OuterVolumeSpecName: "console-config") pod "2ff24014-84b3-43df-a20a-7caa44088b0c" (UID: "2ff24014-84b3-43df-a20a-7caa44088b0c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:07:59.936962 master-0 kubenswrapper[28120]: I0220 15:07:59.935101 28120 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 15:07:59.936962 master-0 kubenswrapper[28120]: I0220 15:07:59.935373 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-service-ca" (OuterVolumeSpecName: "service-ca") pod "2ff24014-84b3-43df-a20a-7caa44088b0c" (UID: "2ff24014-84b3-43df-a20a-7caa44088b0c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:07:59.938132 master-0 kubenswrapper[28120]: I0220 15:07:59.938065 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff24014-84b3-43df-a20a-7caa44088b0c-kube-api-access-cxtgp" (OuterVolumeSpecName: "kube-api-access-cxtgp") pod "2ff24014-84b3-43df-a20a-7caa44088b0c" (UID: "2ff24014-84b3-43df-a20a-7caa44088b0c"). InnerVolumeSpecName "kube-api-access-cxtgp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:07:59.938132 master-0 kubenswrapper[28120]: I0220 15:07:59.938109 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff24014-84b3-43df-a20a-7caa44088b0c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2ff24014-84b3-43df-a20a-7caa44088b0c" (UID: "2ff24014-84b3-43df-a20a-7caa44088b0c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:07:59.938773 master-0 kubenswrapper[28120]: I0220 15:07:59.938708 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff24014-84b3-43df-a20a-7caa44088b0c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2ff24014-84b3-43df-a20a-7caa44088b0c" (UID: "2ff24014-84b3-43df-a20a-7caa44088b0c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:08:00.037263 master-0 kubenswrapper[28120]: I0220 15:08:00.037198 28120 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2ff24014-84b3-43df-a20a-7caa44088b0c-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:00.037263 master-0 kubenswrapper[28120]: I0220 15:08:00.037260 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxtgp\" (UniqueName: \"kubernetes.io/projected/2ff24014-84b3-43df-a20a-7caa44088b0c-kube-api-access-cxtgp\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:00.037467 master-0 kubenswrapper[28120]: I0220 15:08:00.037280 28120 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-console-config\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:00.037467 master-0 kubenswrapper[28120]: I0220 15:08:00.037298 28120 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff24014-84b3-43df-a20a-7caa44088b0c-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:00.037467 master-0 kubenswrapper[28120]: I0220 15:08:00.037315 28120 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2ff24014-84b3-43df-a20a-7caa44088b0c-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:00.718499 master-0 kubenswrapper[28120]: I0220 15:08:00.718435 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bcb747b79-dfz8f_2ff24014-84b3-43df-a20a-7caa44088b0c/console/0.log"
Feb 20 15:08:00.719384 master-0 kubenswrapper[28120]: I0220 15:08:00.718518 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bcb747b79-dfz8f" event={"ID":"2ff24014-84b3-43df-a20a-7caa44088b0c","Type":"ContainerDied","Data":"1a03ab185ca4d95b1b2224e20bb956f383e9e0ac8e82050e65434df2ae9a4d44"}
Feb 20 15:08:00.719384 master-0 kubenswrapper[28120]: I0220 15:08:00.718571 28120 scope.go:117] "RemoveContainer" containerID="43fec6224584ee0092a54e87a413a46e7446e54f8e1d6d0f153460368c1604d6"
Feb 20 15:08:00.719384 master-0 kubenswrapper[28120]: I0220 15:08:00.718684 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bcb747b79-dfz8f"
Feb 20 15:08:00.753975 master-0 kubenswrapper[28120]: I0220 15:08:00.753882 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6bcb747b79-dfz8f"]
Feb 20 15:08:00.765099 master-0 kubenswrapper[28120]: I0220 15:08:00.765034 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6bcb747b79-dfz8f"]
Feb 20 15:08:01.506707 master-0 kubenswrapper[28120]: I0220 15:08:01.506621 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-67ksg"
Feb 20 15:08:02.079774 master-0 kubenswrapper[28120]: I0220 15:08:02.079693 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" path="/var/lib/kubelet/pods/2ff24014-84b3-43df-a20a-7caa44088b0c/volumes"
Feb 20 15:08:02.523964 master-0 kubenswrapper[28120]: I0220 15:08:02.523866 28120 scope.go:117] "RemoveContainer" containerID="a40f295459b4a2af08fc1680c69148647ff567e1baab8a7d9c6564aa36182635"
Feb 20 15:08:04.256498 master-0 kubenswrapper[28120]: I0220 15:08:04.256378 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 20 15:08:06.018204 master-0 kubenswrapper[28120]: I0220 15:08:06.018106 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:08:06.018204 master-0 kubenswrapper[28120]: I0220 15:08:06.018185 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:08:06.023446 master-0 kubenswrapper[28120]: I0220 15:08:06.023376 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:08:06.775342 master-0 kubenswrapper[28120]: I0220 15:08:06.775293 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-544f96cb59-lzxgw"
Feb 20 15:08:06.871037 master-0 kubenswrapper[28120]: I0220 15:08:06.870964 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-68cd6dbb78-rhjhv"]
Feb 20 15:08:07.486460 master-0 kubenswrapper[28120]: I0220 15:08:07.486178 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 20 15:08:18.072904 master-0 kubenswrapper[28120]: I0220 15:08:18.072816 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fbcbfc7fb-bq4tx"]
Feb 20 15:08:18.073617 master-0 kubenswrapper[28120]: E0220 15:08:18.073303 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerName="console"
Feb 20 15:08:18.073617 master-0 kubenswrapper[28120]: I0220 15:08:18.073328 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerName="console"
Feb 20 15:08:18.073723 master-0 kubenswrapper[28120]: I0220 15:08:18.073627 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff24014-84b3-43df-a20a-7caa44088b0c" containerName="console"
Feb 20 15:08:18.079593 master-0 kubenswrapper[28120]: I0220 15:08:18.079544 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.109131 master-0 kubenswrapper[28120]: I0220 15:08:18.109071 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fbcbfc7fb-bq4tx"]
Feb 20 15:08:18.205292 master-0 kubenswrapper[28120]: I0220 15:08:18.205208 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-oauth-serving-cert\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.205521 master-0 kubenswrapper[28120]: I0220 15:08:18.205386 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-console-config\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.205521 master-0 kubenswrapper[28120]: I0220 15:08:18.205425 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbl5w\" (UniqueName: \"kubernetes.io/projected/744bf91d-59af-4aab-bb6b-71bde572550f-kube-api-access-lbl5w\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.205644 master-0 kubenswrapper[28120]: I0220 15:08:18.205603 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/744bf91d-59af-4aab-bb6b-71bde572550f-console-serving-cert\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.205770 master-0 kubenswrapper[28120]: I0220 15:08:18.205746 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-service-ca\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.205850 master-0 kubenswrapper[28120]: I0220 15:08:18.205829 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/744bf91d-59af-4aab-bb6b-71bde572550f-console-oauth-config\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.206157 master-0 kubenswrapper[28120]: I0220 15:08:18.206074 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-trusted-ca-bundle\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.308994 master-0 kubenswrapper[28120]: I0220 15:08:18.308907 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/744bf91d-59af-4aab-bb6b-71bde572550f-console-oauth-config\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.309203 master-0 kubenswrapper[28120]: I0220 15:08:18.309047 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-trusted-ca-bundle\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.309203 master-0 kubenswrapper[28120]: I0220 15:08:18.309107 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-oauth-serving-cert\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.309203 master-0 kubenswrapper[28120]: I0220 15:08:18.309175 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbl5w\" (UniqueName: \"kubernetes.io/projected/744bf91d-59af-4aab-bb6b-71bde572550f-kube-api-access-lbl5w\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.309299 master-0 kubenswrapper[28120]: I0220 15:08:18.309207 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-console-config\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.309333 master-0 kubenswrapper[28120]: I0220 15:08:18.309305 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/744bf91d-59af-4aab-bb6b-71bde572550f-console-serving-cert\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.309365 master-0 kubenswrapper[28120]: I0220 15:08:18.309342 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-service-ca\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.310581 master-0 kubenswrapper[28120]: I0220 15:08:18.310548 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-service-ca\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.311906 master-0 kubenswrapper[28120]: I0220 15:08:18.310670 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-oauth-serving-cert\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.311906 master-0 kubenswrapper[28120]: I0220 15:08:18.311277 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-console-config\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.311906 master-0 kubenswrapper[28120]: I0220 15:08:18.311742 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-trusted-ca-bundle\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.313975 master-0 kubenswrapper[28120]: I0220 15:08:18.313859 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/744bf91d-59af-4aab-bb6b-71bde572550f-console-oauth-config\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.315431 master-0 kubenswrapper[28120]: I0220 15:08:18.315382 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/744bf91d-59af-4aab-bb6b-71bde572550f-console-serving-cert\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.334378 master-0 kubenswrapper[28120]: I0220 15:08:18.334257 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbl5w\" (UniqueName: \"kubernetes.io/projected/744bf91d-59af-4aab-bb6b-71bde572550f-kube-api-access-lbl5w\") pod \"console-6fbcbfc7fb-bq4tx\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:18.409329 master-0 kubenswrapper[28120]: I0220 15:08:18.409233 28120 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-6fbcbfc7fb-bq4tx" Feb 20 15:08:18.753122 master-0 kubenswrapper[28120]: W0220 15:08:18.750483 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744bf91d_59af_4aab_bb6b_71bde572550f.slice/crio-fed62013b96ce4ab5651cc1ca0a130154335be7494045a62fb7335f6d97a1509 WatchSource:0}: Error finding container fed62013b96ce4ab5651cc1ca0a130154335be7494045a62fb7335f6d97a1509: Status 404 returned error can't find the container with id fed62013b96ce4ab5651cc1ca0a130154335be7494045a62fb7335f6d97a1509 Feb 20 15:08:18.771059 master-0 kubenswrapper[28120]: I0220 15:08:18.765478 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fbcbfc7fb-bq4tx"] Feb 20 15:08:18.887185 master-0 kubenswrapper[28120]: I0220 15:08:18.887113 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fbcbfc7fb-bq4tx" event={"ID":"744bf91d-59af-4aab-bb6b-71bde572550f","Type":"ContainerStarted","Data":"fed62013b96ce4ab5651cc1ca0a130154335be7494045a62fb7335f6d97a1509"} Feb 20 15:08:18.961342 master-0 kubenswrapper[28120]: I0220 15:08:18.961237 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 20 15:08:18.962282 master-0 kubenswrapper[28120]: I0220 15:08:18.961733 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="alertmanager" containerID="cri-o://0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01" gracePeriod=120 Feb 20 15:08:18.962717 master-0 kubenswrapper[28120]: I0220 15:08:18.961868 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="prom-label-proxy" 
containerID="cri-o://dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6" gracePeriod=120 Feb 20 15:08:18.962846 master-0 kubenswrapper[28120]: I0220 15:08:18.961886 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="config-reloader" containerID="cri-o://be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1" gracePeriod=120 Feb 20 15:08:18.962846 master-0 kubenswrapper[28120]: I0220 15:08:18.961916 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="kube-rbac-proxy-web" containerID="cri-o://3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d" gracePeriod=120 Feb 20 15:08:18.962846 master-0 kubenswrapper[28120]: I0220 15:08:18.961957 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="kube-rbac-proxy" containerID="cri-o://eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b" gracePeriod=120 Feb 20 15:08:18.963136 master-0 kubenswrapper[28120]: I0220 15:08:18.962896 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="kube-rbac-proxy-metric" containerID="cri-o://8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d" gracePeriod=120 Feb 20 15:08:19.902249 master-0 kubenswrapper[28120]: I0220 15:08:19.902140 28120 generic.go:334] "Generic (PLEG): container finished" podID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerID="dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6" exitCode=0 Feb 20 15:08:19.902249 master-0 kubenswrapper[28120]: I0220 15:08:19.902200 28120 
generic.go:334] "Generic (PLEG): container finished" podID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerID="eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b" exitCode=0 Feb 20 15:08:19.902249 master-0 kubenswrapper[28120]: I0220 15:08:19.902217 28120 generic.go:334] "Generic (PLEG): container finished" podID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerID="be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1" exitCode=0 Feb 20 15:08:19.902249 master-0 kubenswrapper[28120]: I0220 15:08:19.902233 28120 generic.go:334] "Generic (PLEG): container finished" podID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerID="0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01" exitCode=0 Feb 20 15:08:19.903619 master-0 kubenswrapper[28120]: I0220 15:08:19.902280 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerDied","Data":"dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6"} Feb 20 15:08:19.903619 master-0 kubenswrapper[28120]: I0220 15:08:19.902366 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerDied","Data":"eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b"} Feb 20 15:08:19.903619 master-0 kubenswrapper[28120]: I0220 15:08:19.902397 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerDied","Data":"be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1"} Feb 20 15:08:19.903619 master-0 kubenswrapper[28120]: I0220 15:08:19.902433 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerDied","Data":"0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01"} Feb 20 15:08:19.904904 master-0 kubenswrapper[28120]: I0220 15:08:19.904827 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fbcbfc7fb-bq4tx" event={"ID":"744bf91d-59af-4aab-bb6b-71bde572550f","Type":"ContainerStarted","Data":"19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560"} Feb 20 15:08:19.934207 master-0 kubenswrapper[28120]: I0220 15:08:19.934065 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6fbcbfc7fb-bq4tx" podStartSLOduration=1.934038267 podStartE2EDuration="1.934038267s" podCreationTimestamp="2026-02-20 15:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:08:19.932763085 +0000 UTC m=+438.193556688" watchObservedRunningTime="2026-02-20 15:08:19.934038267 +0000 UTC m=+438.194831870" Feb 20 15:08:20.589846 master-0 kubenswrapper[28120]: I0220 15:08:20.589793 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:20.666724 master-0 kubenswrapper[28120]: I0220 15:08:20.666649 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy\") pod \"48af5081-6d64-454b-979a-ee1bc7065bc4\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " Feb 20 15:08:20.667017 master-0 kubenswrapper[28120]: I0220 15:08:20.666738 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"48af5081-6d64-454b-979a-ee1bc7065bc4\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " Feb 20 15:08:20.667017 master-0 kubenswrapper[28120]: I0220 15:08:20.666855 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/48af5081-6d64-454b-979a-ee1bc7065bc4-alertmanager-main-db\") pod \"48af5081-6d64-454b-979a-ee1bc7065bc4\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " Feb 20 15:08:20.667017 master-0 kubenswrapper[28120]: I0220 15:08:20.666950 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-config-volume\") pod \"48af5081-6d64-454b-979a-ee1bc7065bc4\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " Feb 20 15:08:20.667017 master-0 kubenswrapper[28120]: I0220 15:08:20.666994 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-main-tls\") pod \"48af5081-6d64-454b-979a-ee1bc7065bc4\" (UID: 
\"48af5081-6d64-454b-979a-ee1bc7065bc4\") " Feb 20 15:08:20.667287 master-0 kubenswrapper[28120]: I0220 15:08:20.667039 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy-web\") pod \"48af5081-6d64-454b-979a-ee1bc7065bc4\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " Feb 20 15:08:20.667287 master-0 kubenswrapper[28120]: I0220 15:08:20.667070 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48af5081-6d64-454b-979a-ee1bc7065bc4-alertmanager-trusted-ca-bundle\") pod \"48af5081-6d64-454b-979a-ee1bc7065bc4\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " Feb 20 15:08:20.667287 master-0 kubenswrapper[28120]: I0220 15:08:20.667099 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/48af5081-6d64-454b-979a-ee1bc7065bc4-metrics-client-ca\") pod \"48af5081-6d64-454b-979a-ee1bc7065bc4\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " Feb 20 15:08:20.667287 master-0 kubenswrapper[28120]: I0220 15:08:20.667159 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lj7n\" (UniqueName: \"kubernetes.io/projected/48af5081-6d64-454b-979a-ee1bc7065bc4-kube-api-access-8lj7n\") pod \"48af5081-6d64-454b-979a-ee1bc7065bc4\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " Feb 20 15:08:20.667287 master-0 kubenswrapper[28120]: I0220 15:08:20.667229 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/48af5081-6d64-454b-979a-ee1bc7065bc4-tls-assets\") pod \"48af5081-6d64-454b-979a-ee1bc7065bc4\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " Feb 20 15:08:20.667287 
master-0 kubenswrapper[28120]: I0220 15:08:20.667283 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-web-config\") pod \"48af5081-6d64-454b-979a-ee1bc7065bc4\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " Feb 20 15:08:20.667678 master-0 kubenswrapper[28120]: I0220 15:08:20.667313 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/48af5081-6d64-454b-979a-ee1bc7065bc4-config-out\") pod \"48af5081-6d64-454b-979a-ee1bc7065bc4\" (UID: \"48af5081-6d64-454b-979a-ee1bc7065bc4\") " Feb 20 15:08:20.672551 master-0 kubenswrapper[28120]: I0220 15:08:20.672500 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48af5081-6d64-454b-979a-ee1bc7065bc4-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "48af5081-6d64-454b-979a-ee1bc7065bc4" (UID: "48af5081-6d64-454b-979a-ee1bc7065bc4"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:08:20.673321 master-0 kubenswrapper[28120]: I0220 15:08:20.673278 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48af5081-6d64-454b-979a-ee1bc7065bc4-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "48af5081-6d64-454b-979a-ee1bc7065bc4" (UID: "48af5081-6d64-454b-979a-ee1bc7065bc4"). InnerVolumeSpecName "metrics-client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:08:20.673434 master-0 kubenswrapper[28120]: I0220 15:08:20.673306 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48af5081-6d64-454b-979a-ee1bc7065bc4-config-out" (OuterVolumeSpecName: "config-out") pod "48af5081-6d64-454b-979a-ee1bc7065bc4" (UID: "48af5081-6d64-454b-979a-ee1bc7065bc4"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:08:20.675323 master-0 kubenswrapper[28120]: I0220 15:08:20.675265 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48af5081-6d64-454b-979a-ee1bc7065bc4-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "48af5081-6d64-454b-979a-ee1bc7065bc4" (UID: "48af5081-6d64-454b-979a-ee1bc7065bc4"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:08:20.675323 master-0 kubenswrapper[28120]: I0220 15:08:20.675291 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "48af5081-6d64-454b-979a-ee1bc7065bc4" (UID: "48af5081-6d64-454b-979a-ee1bc7065bc4"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:20.676555 master-0 kubenswrapper[28120]: I0220 15:08:20.676495 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "48af5081-6d64-454b-979a-ee1bc7065bc4" (UID: "48af5081-6d64-454b-979a-ee1bc7065bc4"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:20.677761 master-0 kubenswrapper[28120]: I0220 15:08:20.677712 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48af5081-6d64-454b-979a-ee1bc7065bc4-kube-api-access-8lj7n" (OuterVolumeSpecName: "kube-api-access-8lj7n") pod "48af5081-6d64-454b-979a-ee1bc7065bc4" (UID: "48af5081-6d64-454b-979a-ee1bc7065bc4"). InnerVolumeSpecName "kube-api-access-8lj7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:08:20.679172 master-0 kubenswrapper[28120]: I0220 15:08:20.679078 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-config-volume" (OuterVolumeSpecName: "config-volume") pod "48af5081-6d64-454b-979a-ee1bc7065bc4" (UID: "48af5081-6d64-454b-979a-ee1bc7065bc4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:20.680442 master-0 kubenswrapper[28120]: I0220 15:08:20.680399 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "48af5081-6d64-454b-979a-ee1bc7065bc4" (UID: "48af5081-6d64-454b-979a-ee1bc7065bc4"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:20.682107 master-0 kubenswrapper[28120]: I0220 15:08:20.682063 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48af5081-6d64-454b-979a-ee1bc7065bc4-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "48af5081-6d64-454b-979a-ee1bc7065bc4" (UID: "48af5081-6d64-454b-979a-ee1bc7065bc4"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:08:20.682239 master-0 kubenswrapper[28120]: I0220 15:08:20.682161 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "48af5081-6d64-454b-979a-ee1bc7065bc4" (UID: "48af5081-6d64-454b-979a-ee1bc7065bc4"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:20.743380 master-0 kubenswrapper[28120]: I0220 15:08:20.743281 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-web-config" (OuterVolumeSpecName: "web-config") pod "48af5081-6d64-454b-979a-ee1bc7065bc4" (UID: "48af5081-6d64-454b-979a-ee1bc7065bc4"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:20.769554 master-0 kubenswrapper[28120]: I0220 15:08:20.769469 28120 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/48af5081-6d64-454b-979a-ee1bc7065bc4-alertmanager-main-db\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:20.769554 master-0 kubenswrapper[28120]: I0220 15:08:20.769542 28120 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:20.769554 master-0 kubenswrapper[28120]: I0220 15:08:20.769559 28120 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:20.769765 master-0 kubenswrapper[28120]: I0220 15:08:20.769575 
28120 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:20.769765 master-0 kubenswrapper[28120]: I0220 15:08:20.769590 28120 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48af5081-6d64-454b-979a-ee1bc7065bc4-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:20.769765 master-0 kubenswrapper[28120]: I0220 15:08:20.769627 28120 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/48af5081-6d64-454b-979a-ee1bc7065bc4-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:20.769765 master-0 kubenswrapper[28120]: I0220 15:08:20.769643 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lj7n\" (UniqueName: \"kubernetes.io/projected/48af5081-6d64-454b-979a-ee1bc7065bc4-kube-api-access-8lj7n\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:20.769765 master-0 kubenswrapper[28120]: I0220 15:08:20.769655 28120 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/48af5081-6d64-454b-979a-ee1bc7065bc4-tls-assets\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:20.769765 master-0 kubenswrapper[28120]: I0220 15:08:20.769668 28120 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-web-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:20.769765 master-0 kubenswrapper[28120]: I0220 15:08:20.769679 28120 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/48af5081-6d64-454b-979a-ee1bc7065bc4-config-out\") on node \"master-0\" DevicePath 
\"\"" Feb 20 15:08:20.769765 master-0 kubenswrapper[28120]: I0220 15:08:20.769691 28120 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:20.769765 master-0 kubenswrapper[28120]: I0220 15:08:20.769704 28120 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/48af5081-6d64-454b-979a-ee1bc7065bc4-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:20.921532 master-0 kubenswrapper[28120]: I0220 15:08:20.921409 28120 generic.go:334] "Generic (PLEG): container finished" podID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerID="8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d" exitCode=0 Feb 20 15:08:20.921532 master-0 kubenswrapper[28120]: I0220 15:08:20.921485 28120 generic.go:334] "Generic (PLEG): container finished" podID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerID="3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d" exitCode=0 Feb 20 15:08:20.922656 master-0 kubenswrapper[28120]: I0220 15:08:20.921542 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerDied","Data":"8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d"} Feb 20 15:08:20.922656 master-0 kubenswrapper[28120]: I0220 15:08:20.921608 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerDied","Data":"3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d"} Feb 20 15:08:20.922656 master-0 kubenswrapper[28120]: I0220 15:08:20.921553 28120 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:20.922656 master-0 kubenswrapper[28120]: I0220 15:08:20.921645 28120 scope.go:117] "RemoveContainer" containerID="dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6" Feb 20 15:08:20.922656 master-0 kubenswrapper[28120]: I0220 15:08:20.921628 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"48af5081-6d64-454b-979a-ee1bc7065bc4","Type":"ContainerDied","Data":"19a83332786319b4e2e769e38ccf1e9e48f51720bf2ae179d6d2b27a26a896ff"} Feb 20 15:08:20.946864 master-0 kubenswrapper[28120]: I0220 15:08:20.946770 28120 scope.go:117] "RemoveContainer" containerID="8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d" Feb 20 15:08:20.975356 master-0 kubenswrapper[28120]: I0220 15:08:20.975300 28120 scope.go:117] "RemoveContainer" containerID="eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b" Feb 20 15:08:20.983715 master-0 kubenswrapper[28120]: I0220 15:08:20.983601 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 20 15:08:20.990702 master-0 kubenswrapper[28120]: I0220 15:08:20.990640 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 20 15:08:21.020527 master-0 kubenswrapper[28120]: I0220 15:08:21.020480 28120 scope.go:117] "RemoveContainer" containerID="3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d" Feb 20 15:08:21.041728 master-0 kubenswrapper[28120]: I0220 15:08:21.041655 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 20 15:08:21.042074 master-0 kubenswrapper[28120]: E0220 15:08:21.042039 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="prom-label-proxy" Feb 20 15:08:21.042074 master-0 kubenswrapper[28120]: I0220 
15:08:21.042065 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="prom-label-proxy" Feb 20 15:08:21.042202 master-0 kubenswrapper[28120]: E0220 15:08:21.042076 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="kube-rbac-proxy" Feb 20 15:08:21.042202 master-0 kubenswrapper[28120]: I0220 15:08:21.042089 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="kube-rbac-proxy" Feb 20 15:08:21.042202 master-0 kubenswrapper[28120]: E0220 15:08:21.042116 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="alertmanager" Feb 20 15:08:21.042202 master-0 kubenswrapper[28120]: I0220 15:08:21.042124 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="alertmanager" Feb 20 15:08:21.042202 master-0 kubenswrapper[28120]: E0220 15:08:21.042140 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="kube-rbac-proxy-web" Feb 20 15:08:21.042202 master-0 kubenswrapper[28120]: I0220 15:08:21.042148 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="kube-rbac-proxy-web" Feb 20 15:08:21.042202 master-0 kubenswrapper[28120]: E0220 15:08:21.042159 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="init-config-reloader" Feb 20 15:08:21.042202 master-0 kubenswrapper[28120]: I0220 15:08:21.042167 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="init-config-reloader" Feb 20 15:08:21.042202 master-0 kubenswrapper[28120]: E0220 15:08:21.042184 28120 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="kube-rbac-proxy-metric" Feb 20 15:08:21.042202 master-0 kubenswrapper[28120]: I0220 15:08:21.042195 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="kube-rbac-proxy-metric" Feb 20 15:08:21.042627 master-0 kubenswrapper[28120]: E0220 15:08:21.042214 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="config-reloader" Feb 20 15:08:21.042627 master-0 kubenswrapper[28120]: I0220 15:08:21.042224 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="config-reloader" Feb 20 15:08:21.042627 master-0 kubenswrapper[28120]: I0220 15:08:21.042465 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="prom-label-proxy" Feb 20 15:08:21.042627 master-0 kubenswrapper[28120]: I0220 15:08:21.042496 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="alertmanager" Feb 20 15:08:21.042627 master-0 kubenswrapper[28120]: I0220 15:08:21.042510 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="kube-rbac-proxy-metric" Feb 20 15:08:21.042627 master-0 kubenswrapper[28120]: I0220 15:08:21.042544 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="config-reloader" Feb 20 15:08:21.042627 master-0 kubenswrapper[28120]: I0220 15:08:21.042558 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" containerName="kube-rbac-proxy" Feb 20 15:08:21.042627 master-0 kubenswrapper[28120]: I0220 15:08:21.042573 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" 
containerName="kube-rbac-proxy-web" Feb 20 15:08:21.043949 master-0 kubenswrapper[28120]: I0220 15:08:21.043660 28120 scope.go:117] "RemoveContainer" containerID="be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1" Feb 20 15:08:21.045455 master-0 kubenswrapper[28120]: I0220 15:08:21.045409 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.047762 master-0 kubenswrapper[28120]: I0220 15:08:21.047669 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 20 15:08:21.054890 master-0 kubenswrapper[28120]: I0220 15:08:21.051268 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-7sd87" Feb 20 15:08:21.054890 master-0 kubenswrapper[28120]: I0220 15:08:21.054045 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 20 15:08:21.054890 master-0 kubenswrapper[28120]: I0220 15:08:21.054384 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 20 15:08:21.054890 master-0 kubenswrapper[28120]: I0220 15:08:21.054613 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 20 15:08:21.054890 master-0 kubenswrapper[28120]: I0220 15:08:21.054733 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 20 15:08:21.054890 master-0 kubenswrapper[28120]: I0220 15:08:21.054809 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 20 15:08:21.056967 master-0 kubenswrapper[28120]: I0220 15:08:21.056498 28120 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 20 15:08:21.064886 master-0 kubenswrapper[28120]: I0220 15:08:21.064216 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 20 15:08:21.072039 master-0 kubenswrapper[28120]: I0220 15:08:21.071992 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 20 15:08:21.090688 master-0 kubenswrapper[28120]: I0220 15:08:21.083483 28120 scope.go:117] "RemoveContainer" containerID="0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01" Feb 20 15:08:21.106318 master-0 kubenswrapper[28120]: I0220 15:08:21.106274 28120 scope.go:117] "RemoveContainer" containerID="5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199" Feb 20 15:08:21.122014 master-0 kubenswrapper[28120]: I0220 15:08:21.121958 28120 scope.go:117] "RemoveContainer" containerID="dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6" Feb 20 15:08:21.122367 master-0 kubenswrapper[28120]: E0220 15:08:21.122343 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6\": container with ID starting with dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6 not found: ID does not exist" containerID="dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6" Feb 20 15:08:21.122445 master-0 kubenswrapper[28120]: I0220 15:08:21.122373 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6"} err="failed to get container status \"dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6\": rpc error: code = NotFound desc = could not find container \"dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6\": container with ID 
starting with dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6 not found: ID does not exist" Feb 20 15:08:21.122445 master-0 kubenswrapper[28120]: I0220 15:08:21.122397 28120 scope.go:117] "RemoveContainer" containerID="8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d" Feb 20 15:08:21.122856 master-0 kubenswrapper[28120]: E0220 15:08:21.122653 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d\": container with ID starting with 8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d not found: ID does not exist" containerID="8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d" Feb 20 15:08:21.122856 master-0 kubenswrapper[28120]: I0220 15:08:21.122703 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d"} err="failed to get container status \"8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d\": rpc error: code = NotFound desc = could not find container \"8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d\": container with ID starting with 8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d not found: ID does not exist" Feb 20 15:08:21.122856 master-0 kubenswrapper[28120]: I0220 15:08:21.122736 28120 scope.go:117] "RemoveContainer" containerID="eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b" Feb 20 15:08:21.123089 master-0 kubenswrapper[28120]: E0220 15:08:21.123041 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b\": container with ID starting with eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b not found: ID does not exist" 
containerID="eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b" Feb 20 15:08:21.123089 master-0 kubenswrapper[28120]: I0220 15:08:21.123067 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b"} err="failed to get container status \"eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b\": rpc error: code = NotFound desc = could not find container \"eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b\": container with ID starting with eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b not found: ID does not exist" Feb 20 15:08:21.123089 master-0 kubenswrapper[28120]: I0220 15:08:21.123084 28120 scope.go:117] "RemoveContainer" containerID="3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d" Feb 20 15:08:21.123332 master-0 kubenswrapper[28120]: E0220 15:08:21.123308 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d\": container with ID starting with 3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d not found: ID does not exist" containerID="3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d" Feb 20 15:08:21.123416 master-0 kubenswrapper[28120]: I0220 15:08:21.123331 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d"} err="failed to get container status \"3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d\": rpc error: code = NotFound desc = could not find container \"3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d\": container with ID starting with 3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d not found: ID does not exist" Feb 20 15:08:21.123416 master-0 
kubenswrapper[28120]: I0220 15:08:21.123345 28120 scope.go:117] "RemoveContainer" containerID="be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1" Feb 20 15:08:21.123580 master-0 kubenswrapper[28120]: E0220 15:08:21.123540 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1\": container with ID starting with be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1 not found: ID does not exist" containerID="be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1" Feb 20 15:08:21.123580 master-0 kubenswrapper[28120]: I0220 15:08:21.123556 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1"} err="failed to get container status \"be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1\": rpc error: code = NotFound desc = could not find container \"be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1\": container with ID starting with be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1 not found: ID does not exist" Feb 20 15:08:21.123580 master-0 kubenswrapper[28120]: I0220 15:08:21.123569 28120 scope.go:117] "RemoveContainer" containerID="0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01" Feb 20 15:08:21.123826 master-0 kubenswrapper[28120]: E0220 15:08:21.123779 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01\": container with ID starting with 0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01 not found: ID does not exist" containerID="0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01" Feb 20 15:08:21.123826 master-0 kubenswrapper[28120]: I0220 15:08:21.123806 
28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01"} err="failed to get container status \"0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01\": rpc error: code = NotFound desc = could not find container \"0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01\": container with ID starting with 0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01 not found: ID does not exist" Feb 20 15:08:21.123826 master-0 kubenswrapper[28120]: I0220 15:08:21.123823 28120 scope.go:117] "RemoveContainer" containerID="5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199" Feb 20 15:08:21.124220 master-0 kubenswrapper[28120]: E0220 15:08:21.124198 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199\": container with ID starting with 5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199 not found: ID does not exist" containerID="5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199" Feb 20 15:08:21.124220 master-0 kubenswrapper[28120]: I0220 15:08:21.124217 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199"} err="failed to get container status \"5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199\": rpc error: code = NotFound desc = could not find container \"5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199\": container with ID starting with 5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199 not found: ID does not exist" Feb 20 15:08:21.124372 master-0 kubenswrapper[28120]: I0220 15:08:21.124229 28120 scope.go:117] "RemoveContainer" 
containerID="dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6" Feb 20 15:08:21.124518 master-0 kubenswrapper[28120]: I0220 15:08:21.124441 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6"} err="failed to get container status \"dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6\": rpc error: code = NotFound desc = could not find container \"dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6\": container with ID starting with dbe6327788b4b56b4ff2ba390aafe4fd5f70b887fcfa425bf456f923cdeafcf6 not found: ID does not exist" Feb 20 15:08:21.124518 master-0 kubenswrapper[28120]: I0220 15:08:21.124460 28120 scope.go:117] "RemoveContainer" containerID="8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d" Feb 20 15:08:21.124967 master-0 kubenswrapper[28120]: I0220 15:08:21.124719 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d"} err="failed to get container status \"8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d\": rpc error: code = NotFound desc = could not find container \"8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d\": container with ID starting with 8b1c9d2e11a469a61e039094315dca8e169936c650bb25bbb10a807feb98f61d not found: ID does not exist" Feb 20 15:08:21.124967 master-0 kubenswrapper[28120]: I0220 15:08:21.124745 28120 scope.go:117] "RemoveContainer" containerID="eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b" Feb 20 15:08:21.125266 master-0 kubenswrapper[28120]: I0220 15:08:21.125018 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b"} err="failed to get container status 
\"eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b\": rpc error: code = NotFound desc = could not find container \"eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b\": container with ID starting with eb293aa830f45350531858a7350f6d4df7045598e6d9d9482b6007984f93d78b not found: ID does not exist" Feb 20 15:08:21.125266 master-0 kubenswrapper[28120]: I0220 15:08:21.125044 28120 scope.go:117] "RemoveContainer" containerID="3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d" Feb 20 15:08:21.125751 master-0 kubenswrapper[28120]: I0220 15:08:21.125313 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d"} err="failed to get container status \"3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d\": rpc error: code = NotFound desc = could not find container \"3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d\": container with ID starting with 3292c69333abb00e2ea70101d95622853698a393eb8ebb6a3d4f91608421515d not found: ID does not exist" Feb 20 15:08:21.125751 master-0 kubenswrapper[28120]: I0220 15:08:21.125330 28120 scope.go:117] "RemoveContainer" containerID="be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1" Feb 20 15:08:21.125751 master-0 kubenswrapper[28120]: I0220 15:08:21.125619 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1"} err="failed to get container status \"be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1\": rpc error: code = NotFound desc = could not find container \"be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1\": container with ID starting with be0223a48543fce8c5cf95adf936c50e7b224a09a8c96c371c9d92676b06f8f1 not found: ID does not exist" Feb 20 15:08:21.125751 master-0 kubenswrapper[28120]: I0220 
15:08:21.125644 28120 scope.go:117] "RemoveContainer" containerID="0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01" Feb 20 15:08:21.126314 master-0 kubenswrapper[28120]: I0220 15:08:21.125952 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01"} err="failed to get container status \"0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01\": rpc error: code = NotFound desc = could not find container \"0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01\": container with ID starting with 0f194cf73d731bdd937503715dfb49dcb48e0846685119002e6763cfe4bb3c01 not found: ID does not exist" Feb 20 15:08:21.126314 master-0 kubenswrapper[28120]: I0220 15:08:21.125987 28120 scope.go:117] "RemoveContainer" containerID="5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199" Feb 20 15:08:21.126314 master-0 kubenswrapper[28120]: I0220 15:08:21.126246 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199"} err="failed to get container status \"5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199\": rpc error: code = NotFound desc = could not find container \"5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199\": container with ID starting with 5ad622774ead6e685f9fb4dfc0db016a0fd7e7b66e65e0e546c696a6aa340199 not found: ID does not exist" Feb 20 15:08:21.178990 master-0 kubenswrapper[28120]: I0220 15:08:21.177780 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e71e3186-e943-441c-bf31-39dc7c8a9947-config-out\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.178990 master-0 
kubenswrapper[28120]: I0220 15:08:21.177912 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.178990 master-0 kubenswrapper[28120]: I0220 15:08:21.177994 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7blx\" (UniqueName: \"kubernetes.io/projected/e71e3186-e943-441c-bf31-39dc7c8a9947-kube-api-access-c7blx\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.178990 master-0 kubenswrapper[28120]: I0220 15:08:21.178087 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e71e3186-e943-441c-bf31-39dc7c8a9947-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.178990 master-0 kubenswrapper[28120]: I0220 15:08:21.178127 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e71e3186-e943-441c-bf31-39dc7c8a9947-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.178990 master-0 kubenswrapper[28120]: I0220 15:08:21.178148 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e71e3186-e943-441c-bf31-39dc7c8a9947-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.178990 master-0 kubenswrapper[28120]: I0220 15:08:21.178170 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e71e3186-e943-441c-bf31-39dc7c8a9947-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.178990 master-0 kubenswrapper[28120]: I0220 15:08:21.178232 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-config-volume\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.178990 master-0 kubenswrapper[28120]: I0220 15:08:21.178289 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.178990 master-0 kubenswrapper[28120]: I0220 15:08:21.178312 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-web-config\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.178990 master-0 kubenswrapper[28120]: I0220 15:08:21.178737 28120 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.179794 master-0 kubenswrapper[28120]: I0220 15:08:21.179118 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.281401 master-0 kubenswrapper[28120]: I0220 15:08:21.281171 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-config-volume\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.281401 master-0 kubenswrapper[28120]: I0220 15:08:21.281268 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.281401 master-0 kubenswrapper[28120]: I0220 15:08:21.281306 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-web-config\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.281880 master-0 kubenswrapper[28120]: I0220 15:08:21.281416 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.281880 master-0 kubenswrapper[28120]: I0220 15:08:21.281505 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.281880 master-0 kubenswrapper[28120]: I0220 15:08:21.281633 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e71e3186-e943-441c-bf31-39dc7c8a9947-config-out\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.281880 master-0 kubenswrapper[28120]: I0220 15:08:21.281677 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.281880 master-0 kubenswrapper[28120]: I0220 15:08:21.281728 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7blx\" (UniqueName: 
\"kubernetes.io/projected/e71e3186-e943-441c-bf31-39dc7c8a9947-kube-api-access-c7blx\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.281880 master-0 kubenswrapper[28120]: I0220 15:08:21.281791 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e71e3186-e943-441c-bf31-39dc7c8a9947-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.281880 master-0 kubenswrapper[28120]: I0220 15:08:21.281827 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e71e3186-e943-441c-bf31-39dc7c8a9947-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.281880 master-0 kubenswrapper[28120]: I0220 15:08:21.281858 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e71e3186-e943-441c-bf31-39dc7c8a9947-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.281880 master-0 kubenswrapper[28120]: I0220 15:08:21.281891 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e71e3186-e943-441c-bf31-39dc7c8a9947-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.283904 master-0 kubenswrapper[28120]: I0220 15:08:21.283848 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e71e3186-e943-441c-bf31-39dc7c8a9947-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.284122 master-0 kubenswrapper[28120]: I0220 15:08:21.284066 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e71e3186-e943-441c-bf31-39dc7c8a9947-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.284451 master-0 kubenswrapper[28120]: I0220 15:08:21.284372 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e71e3186-e943-441c-bf31-39dc7c8a9947-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.286566 master-0 kubenswrapper[28120]: I0220 15:08:21.286347 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e71e3186-e943-441c-bf31-39dc7c8a9947-config-out\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.286566 master-0 kubenswrapper[28120]: I0220 15:08:21.286493 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.287324 master-0 kubenswrapper[28120]: I0220 15:08:21.287233 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e71e3186-e943-441c-bf31-39dc7c8a9947-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.287964 master-0 kubenswrapper[28120]: I0220 15:08:21.287887 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-web-config\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.289147 master-0 kubenswrapper[28120]: I0220 15:08:21.289074 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.289235 master-0 kubenswrapper[28120]: I0220 15:08:21.289195 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-config-volume\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.290235 master-0 kubenswrapper[28120]: I0220 15:08:21.290184 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0" Feb 20 15:08:21.290404 master-0 kubenswrapper[28120]: I0220 15:08:21.290353 28120 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e71e3186-e943-441c-bf31-39dc7c8a9947-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:08:21.316506 master-0 kubenswrapper[28120]: I0220 15:08:21.316431 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7blx\" (UniqueName: \"kubernetes.io/projected/e71e3186-e943-441c-bf31-39dc7c8a9947-kube-api-access-c7blx\") pod \"alertmanager-main-0\" (UID: \"e71e3186-e943-441c-bf31-39dc7c8a9947\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:08:21.374415 master-0 kubenswrapper[28120]: I0220 15:08:21.374327 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 20 15:08:21.866127 master-0 kubenswrapper[28120]: W0220 15:08:21.866051 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode71e3186_e943_441c_bf31_39dc7c8a9947.slice/crio-5b00071612e815f669cfa60898aa1e6b218a7524988f4ce4370486cefbf6d312 WatchSource:0}: Error finding container 5b00071612e815f669cfa60898aa1e6b218a7524988f4ce4370486cefbf6d312: Status 404 returned error can't find the container with id 5b00071612e815f669cfa60898aa1e6b218a7524988f4ce4370486cefbf6d312
Feb 20 15:08:21.878263 master-0 kubenswrapper[28120]: I0220 15:08:21.878204 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 20 15:08:21.932013 master-0 kubenswrapper[28120]: I0220 15:08:21.931911 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e71e3186-e943-441c-bf31-39dc7c8a9947","Type":"ContainerStarted","Data":"5b00071612e815f669cfa60898aa1e6b218a7524988f4ce4370486cefbf6d312"}
Feb 20 15:08:22.075110 master-0 kubenswrapper[28120]: I0220 15:08:22.075037 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48af5081-6d64-454b-979a-ee1bc7065bc4" path="/var/lib/kubelet/pods/48af5081-6d64-454b-979a-ee1bc7065bc4/volumes"
Feb 20 15:08:22.944830 master-0 kubenswrapper[28120]: I0220 15:08:22.944766 28120 generic.go:334] "Generic (PLEG): container finished" podID="e71e3186-e943-441c-bf31-39dc7c8a9947" containerID="763718294c8bed9507b52fba8568355aebe3cf52896d8b7f0a2a8417c1065c8d" exitCode=0
Feb 20 15:08:22.945515 master-0 kubenswrapper[28120]: I0220 15:08:22.945460 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e71e3186-e943-441c-bf31-39dc7c8a9947","Type":"ContainerDied","Data":"763718294c8bed9507b52fba8568355aebe3cf52896d8b7f0a2a8417c1065c8d"}
Feb 20 15:08:23.963423 master-0 kubenswrapper[28120]: I0220 15:08:23.963363 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e71e3186-e943-441c-bf31-39dc7c8a9947","Type":"ContainerStarted","Data":"9f16ee57749abb94a59803900b83fbfa70bb4e10a3d80e396810e47f779037e7"}
Feb 20 15:08:23.964232 master-0 kubenswrapper[28120]: I0220 15:08:23.963434 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e71e3186-e943-441c-bf31-39dc7c8a9947","Type":"ContainerStarted","Data":"0c1710b8f9e87a019acb9248e10d939c25dfd4c11af305edcb2480b12b484307"}
Feb 20 15:08:23.964232 master-0 kubenswrapper[28120]: I0220 15:08:23.963454 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e71e3186-e943-441c-bf31-39dc7c8a9947","Type":"ContainerStarted","Data":"b1be36ce45130532f69b354df8f695e1fee765fe4c934d2076a469129d46eedd"}
Feb 20 15:08:23.964232 master-0 kubenswrapper[28120]: I0220 15:08:23.963471 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e71e3186-e943-441c-bf31-39dc7c8a9947","Type":"ContainerStarted","Data":"d6b54cc3079ea5306ba8f5632538039b113a2e6b3e664d31574f4f3c77ba96c5"}
Feb 20 15:08:23.964232 master-0 kubenswrapper[28120]: I0220 15:08:23.963486 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e71e3186-e943-441c-bf31-39dc7c8a9947","Type":"ContainerStarted","Data":"a197aa946dd3f9fedaaf942c4bd33affc97fb9ca43bc10b99f85e0f4b834c889"}
Feb 20 15:08:24.986004 master-0 kubenswrapper[28120]: I0220 15:08:24.985889 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e71e3186-e943-441c-bf31-39dc7c8a9947","Type":"ContainerStarted","Data":"e4775ca0bd1a84b98f26d65dfdb19e34df23761a7bdd3e7490bfbb7918385b83"}
Feb 20 15:08:25.039631 master-0 kubenswrapper[28120]: I0220 15:08:25.039339 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=5.039309986 podStartE2EDuration="5.039309986s" podCreationTimestamp="2026-02-20 15:08:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:08:25.031144061 +0000 UTC m=+443.291937694" watchObservedRunningTime="2026-02-20 15:08:25.039309986 +0000 UTC m=+443.300103589"
Feb 20 15:08:27.014229 master-0 kubenswrapper[28120]: I0220 15:08:27.014162 28120 generic.go:334] "Generic (PLEG): container finished" podID="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" containerID="6de3357e6e18954512d073202b91b501ca58384ea08b18ec75d08c4929c63531" exitCode=0
Feb 20 15:08:27.014829 master-0 kubenswrapper[28120]: I0220 15:08:27.014292 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" event={"ID":"bdd203e0-3dd9-4e9d-81f1-46f60d235e38","Type":"ContainerDied","Data":"6de3357e6e18954512d073202b91b501ca58384ea08b18ec75d08c4929c63531"}
Feb 20 15:08:27.179253 master-0 kubenswrapper[28120]: I0220 15:08:27.179150 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:08:27.303011 master-0 kubenswrapper[28120]: I0220 15:08:27.302792 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle\") pod \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") "
Feb 20 15:08:27.303011 master-0 kubenswrapper[28120]: I0220 15:08:27.302920 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-audit-log\") pod \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") "
Feb 20 15:08:27.303352 master-0 kubenswrapper[28120]: I0220 15:08:27.303019 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-metrics-server-audit-profiles\") pod \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") "
Feb 20 15:08:27.303352 master-0 kubenswrapper[28120]: I0220 15:08:27.303102 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle\") pod \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") "
Feb 20 15:08:27.303352 master-0 kubenswrapper[28120]: I0220 15:08:27.303149 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zppr\" (UniqueName: \"kubernetes.io/projected/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-kube-api-access-9zppr\") pod \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") "
Feb 20 15:08:27.303352 master-0 kubenswrapper[28120]: I0220 15:08:27.303220 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-client-certs\") pod \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") "
Feb 20 15:08:27.303352 master-0 kubenswrapper[28120]: I0220 15:08:27.303266 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-server-tls\") pod \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\" (UID: \"bdd203e0-3dd9-4e9d-81f1-46f60d235e38\") "
Feb 20 15:08:27.306540 master-0 kubenswrapper[28120]: I0220 15:08:27.306482 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-audit-log" (OuterVolumeSpecName: "audit-log") pod "bdd203e0-3dd9-4e9d-81f1-46f60d235e38" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:08:27.306685 master-0 kubenswrapper[28120]: I0220 15:08:27.306578 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "bdd203e0-3dd9-4e9d-81f1-46f60d235e38" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:08:27.306834 master-0 kubenswrapper[28120]: I0220 15:08:27.306770 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "bdd203e0-3dd9-4e9d-81f1-46f60d235e38" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:08:27.308321 master-0 kubenswrapper[28120]: I0220 15:08:27.308267 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "bdd203e0-3dd9-4e9d-81f1-46f60d235e38" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:08:27.308820 master-0 kubenswrapper[28120]: I0220 15:08:27.308772 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "bdd203e0-3dd9-4e9d-81f1-46f60d235e38" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:08:27.308911 master-0 kubenswrapper[28120]: I0220 15:08:27.308846 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-kube-api-access-9zppr" (OuterVolumeSpecName: "kube-api-access-9zppr") pod "bdd203e0-3dd9-4e9d-81f1-46f60d235e38" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38"). InnerVolumeSpecName "kube-api-access-9zppr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:08:27.310679 master-0 kubenswrapper[28120]: I0220 15:08:27.310626 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "bdd203e0-3dd9-4e9d-81f1-46f60d235e38" (UID: "bdd203e0-3dd9-4e9d-81f1-46f60d235e38"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:08:27.406189 master-0 kubenswrapper[28120]: I0220 15:08:27.406071 28120 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:27.406189 master-0 kubenswrapper[28120]: I0220 15:08:27.406169 28120 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-audit-log\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:27.406762 master-0 kubenswrapper[28120]: I0220 15:08:27.406211 28120 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:27.406762 master-0 kubenswrapper[28120]: I0220 15:08:27.406244 28120 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-client-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:27.406762 master-0 kubenswrapper[28120]: I0220 15:08:27.406278 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zppr\" (UniqueName: \"kubernetes.io/projected/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-kube-api-access-9zppr\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:27.406762 master-0 kubenswrapper[28120]: I0220 15:08:27.406305 28120 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:27.406762 master-0 kubenswrapper[28120]: I0220 15:08:27.406331 28120 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bdd203e0-3dd9-4e9d-81f1-46f60d235e38-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:28.027315 master-0 kubenswrapper[28120]: I0220 15:08:28.027235 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2" event={"ID":"bdd203e0-3dd9-4e9d-81f1-46f60d235e38","Type":"ContainerDied","Data":"3209ad8e141d4f4023abb0b8711dc267473b98fd78163c32b9a46c610babe186"}
Feb 20 15:08:28.027315 master-0 kubenswrapper[28120]: I0220 15:08:28.027322 28120 scope.go:117] "RemoveContainer" containerID="6de3357e6e18954512d073202b91b501ca58384ea08b18ec75d08c4929c63531"
Feb 20 15:08:28.028251 master-0 kubenswrapper[28120]: I0220 15:08:28.027392 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-9bcdd7684-kz2z2"
Feb 20 15:08:28.111413 master-0 kubenswrapper[28120]: I0220 15:08:28.111338 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-9bcdd7684-kz2z2"]
Feb 20 15:08:28.121075 master-0 kubenswrapper[28120]: I0220 15:08:28.121013 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-9bcdd7684-kz2z2"]
Feb 20 15:08:28.410052 master-0 kubenswrapper[28120]: I0220 15:08:28.409885 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:28.410052 master-0 kubenswrapper[28120]: I0220 15:08:28.409982 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:28.418045 master-0 kubenswrapper[28120]: I0220 15:08:28.417976 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:29.047015 master-0 kubenswrapper[28120]: I0220 15:08:29.046952 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:08:29.155043 master-0 kubenswrapper[28120]: I0220 15:08:29.149375 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-544f96cb59-lzxgw"]
Feb 20 15:08:30.071497 master-0 kubenswrapper[28120]: I0220 15:08:30.071428 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" path="/var/lib/kubelet/pods/bdd203e0-3dd9-4e9d-81f1-46f60d235e38/volumes"
Feb 20 15:08:31.942667 master-0 kubenswrapper[28120]: I0220 15:08:31.942568 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-68cd6dbb78-rhjhv" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerName="console" containerID="cri-o://b3ad512efb5dbbcc33d6138b656a0885487bca2c87d7c1ae457add1c2c74ff8e" gracePeriod=15
Feb 20 15:08:32.089373 master-0 kubenswrapper[28120]: I0220 15:08:32.089296 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68cd6dbb78-rhjhv_bbe031c3-3ab8-42af-ab24-718d83d7d121/console/0.log"
Feb 20 15:08:32.089567 master-0 kubenswrapper[28120]: I0220 15:08:32.089397 28120 generic.go:334] "Generic (PLEG): container finished" podID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerID="b3ad512efb5dbbcc33d6138b656a0885487bca2c87d7c1ae457add1c2c74ff8e" exitCode=2
Feb 20 15:08:32.092998 master-0 kubenswrapper[28120]: I0220 15:08:32.092906 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cd6dbb78-rhjhv" event={"ID":"bbe031c3-3ab8-42af-ab24-718d83d7d121","Type":"ContainerDied","Data":"b3ad512efb5dbbcc33d6138b656a0885487bca2c87d7c1ae457add1c2c74ff8e"}
Feb 20 15:08:33.100269 master-0 kubenswrapper[28120]: I0220 15:08:33.099974 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68cd6dbb78-rhjhv_bbe031c3-3ab8-42af-ab24-718d83d7d121/console/0.log"
Feb 20 15:08:33.101181 master-0 kubenswrapper[28120]: I0220 15:08:33.101139 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cd6dbb78-rhjhv" event={"ID":"bbe031c3-3ab8-42af-ab24-718d83d7d121","Type":"ContainerDied","Data":"92f96c88d30a444197ed70e8bd675e0d553d6337d16c8d4ecd884eae81ff681e"}
Feb 20 15:08:33.101374 master-0 kubenswrapper[28120]: I0220 15:08:33.101349 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f96c88d30a444197ed70e8bd675e0d553d6337d16c8d4ecd884eae81ff681e"
Feb 20 15:08:33.412435 master-0 kubenswrapper[28120]: I0220 15:08:33.412374 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68cd6dbb78-rhjhv_bbe031c3-3ab8-42af-ab24-718d83d7d121/console/0.log"
Feb 20 15:08:33.412620 master-0 kubenswrapper[28120]: I0220 15:08:33.412575 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68cd6dbb78-rhjhv"
Feb 20 15:08:33.521390 master-0 kubenswrapper[28120]: I0220 15:08:33.521291 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-serving-cert\") pod \"bbe031c3-3ab8-42af-ab24-718d83d7d121\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") "
Feb 20 15:08:33.521788 master-0 kubenswrapper[28120]: I0220 15:08:33.521496 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-trusted-ca-bundle\") pod \"bbe031c3-3ab8-42af-ab24-718d83d7d121\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") "
Feb 20 15:08:33.521788 master-0 kubenswrapper[28120]: I0220 15:08:33.521569 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-oauth-config\") pod \"bbe031c3-3ab8-42af-ab24-718d83d7d121\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") "
Feb 20 15:08:33.521788 master-0 kubenswrapper[28120]: I0220 15:08:33.521609 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-oauth-serving-cert\") pod \"bbe031c3-3ab8-42af-ab24-718d83d7d121\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") "
Feb 20 15:08:33.521788 master-0 kubenswrapper[28120]: I0220 15:08:33.521677 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-service-ca\") pod \"bbe031c3-3ab8-42af-ab24-718d83d7d121\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") "
Feb 20 15:08:33.522085 master-0 kubenswrapper[28120]: I0220 15:08:33.521864 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-config\") pod \"bbe031c3-3ab8-42af-ab24-718d83d7d121\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") "
Feb 20 15:08:33.522085 master-0 kubenswrapper[28120]: I0220 15:08:33.521970 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z4sc\" (UniqueName: \"kubernetes.io/projected/bbe031c3-3ab8-42af-ab24-718d83d7d121-kube-api-access-5z4sc\") pod \"bbe031c3-3ab8-42af-ab24-718d83d7d121\" (UID: \"bbe031c3-3ab8-42af-ab24-718d83d7d121\") "
Feb 20 15:08:33.522605 master-0 kubenswrapper[28120]: I0220 15:08:33.522483 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bbe031c3-3ab8-42af-ab24-718d83d7d121" (UID: "bbe031c3-3ab8-42af-ab24-718d83d7d121"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:08:33.522857 master-0 kubenswrapper[28120]: I0220 15:08:33.522503 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bbe031c3-3ab8-42af-ab24-718d83d7d121" (UID: "bbe031c3-3ab8-42af-ab24-718d83d7d121"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:08:33.522857 master-0 kubenswrapper[28120]: I0220 15:08:33.522816 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-service-ca" (OuterVolumeSpecName: "service-ca") pod "bbe031c3-3ab8-42af-ab24-718d83d7d121" (UID: "bbe031c3-3ab8-42af-ab24-718d83d7d121"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:08:33.523203 master-0 kubenswrapper[28120]: I0220 15:08:33.522917 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-config" (OuterVolumeSpecName: "console-config") pod "bbe031c3-3ab8-42af-ab24-718d83d7d121" (UID: "bbe031c3-3ab8-42af-ab24-718d83d7d121"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:08:33.525469 master-0 kubenswrapper[28120]: I0220 15:08:33.525402 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe031c3-3ab8-42af-ab24-718d83d7d121-kube-api-access-5z4sc" (OuterVolumeSpecName: "kube-api-access-5z4sc") pod "bbe031c3-3ab8-42af-ab24-718d83d7d121" (UID: "bbe031c3-3ab8-42af-ab24-718d83d7d121"). InnerVolumeSpecName "kube-api-access-5z4sc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:08:33.526222 master-0 kubenswrapper[28120]: I0220 15:08:33.526172 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bbe031c3-3ab8-42af-ab24-718d83d7d121" (UID: "bbe031c3-3ab8-42af-ab24-718d83d7d121"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:08:33.526738 master-0 kubenswrapper[28120]: I0220 15:08:33.526670 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bbe031c3-3ab8-42af-ab24-718d83d7d121" (UID: "bbe031c3-3ab8-42af-ab24-718d83d7d121"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:08:33.624695 master-0 kubenswrapper[28120]: I0220 15:08:33.624536 28120 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-config\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:33.624695 master-0 kubenswrapper[28120]: I0220 15:08:33.624621 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z4sc\" (UniqueName: \"kubernetes.io/projected/bbe031c3-3ab8-42af-ab24-718d83d7d121-kube-api-access-5z4sc\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:33.624695 master-0 kubenswrapper[28120]: I0220 15:08:33.624645 28120 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:33.624695 master-0 kubenswrapper[28120]: I0220 15:08:33.624666 28120 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:33.624695 master-0 kubenswrapper[28120]: I0220 15:08:33.624686 28120 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbe031c3-3ab8-42af-ab24-718d83d7d121-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:33.625190 master-0 kubenswrapper[28120]: I0220 15:08:33.624703 28120 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:33.625190 master-0 kubenswrapper[28120]: I0220 15:08:33.624725 28120 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbe031c3-3ab8-42af-ab24-718d83d7d121-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:34.109771 master-0 kubenswrapper[28120]: I0220 15:08:34.109713 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68cd6dbb78-rhjhv"
Feb 20 15:08:34.146019 master-0 kubenswrapper[28120]: I0220 15:08:34.145945 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-68cd6dbb78-rhjhv"]
Feb 20 15:08:34.156506 master-0 kubenswrapper[28120]: I0220 15:08:34.156426 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-68cd6dbb78-rhjhv"]
Feb 20 15:08:36.066943 master-0 kubenswrapper[28120]: I0220 15:08:36.066825 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" path="/var/lib/kubelet/pods/bbe031c3-3ab8-42af-ab24-718d83d7d121/volumes"
Feb 20 15:08:46.563414 master-0 kubenswrapper[28120]: I0220 15:08:46.529509 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 20 15:08:46.563414 master-0 kubenswrapper[28120]: I0220 15:08:46.530216 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="prometheus" containerID="cri-o://0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d" gracePeriod=600
Feb 20 15:08:46.563414 master-0 kubenswrapper[28120]: I0220 15:08:46.530272 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="kube-rbac-proxy" containerID="cri-o://ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33" gracePeriod=600
Feb 20 15:08:46.563414 master-0 kubenswrapper[28120]: I0220 15:08:46.530341 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="kube-rbac-proxy-web" containerID="cri-o://62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5" gracePeriod=600
Feb 20 15:08:46.563414 master-0 kubenswrapper[28120]: I0220 15:08:46.530298 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="thanos-sidecar" containerID="cri-o://c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e" gracePeriod=600
Feb 20 15:08:46.563414 master-0 kubenswrapper[28120]: I0220 15:08:46.530353 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="config-reloader" containerID="cri-o://cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8" gracePeriod=600
Feb 20 15:08:46.563414 master-0 kubenswrapper[28120]: I0220 15:08:46.530456 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="kube-rbac-proxy-thanos" containerID="cri-o://451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704" gracePeriod=600
Feb 20 15:08:47.005529 master-0 kubenswrapper[28120]: I0220 15:08:47.005443 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 20 15:08:47.168448 master-0 kubenswrapper[28120]: I0220 15:08:47.168357 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.168448 master-0 kubenswrapper[28120]: I0220 15:08:47.168429 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.168448 master-0 kubenswrapper[28120]: I0220 15:08:47.168452 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-k8s-rulefiles-0\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.168448 master-0 kubenswrapper[28120]: I0220 15:08:47.168472 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpdzb\" (UniqueName: \"kubernetes.io/projected/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-kube-api-access-fpdzb\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.168599 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-config\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.168691 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-kube-rbac-proxy\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.168729 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-metrics-client-certs\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.168778 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-config-out\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.168819 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-k8s-db\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.168857 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-grpc-tls\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.168907 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-serving-certs-ca-bundle\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.168954 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-kubelet-serving-ca-bundle\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.169017 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-web-config\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.169056 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-tls\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.169117 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-thanos-prometheus-http-client-file\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.169167 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-tls-assets\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.169219 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-metrics-client-ca\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.169280 master-0 kubenswrapper[28120]: I0220 15:08:47.169242 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-trusted-ca-bundle\") pod \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\" (UID: \"7f79c358-60c9-4d3e-a830-ce6cab8e39d6\") "
Feb 20 15:08:47.170241 master-0 kubenswrapper[28120]: I0220 15:08:47.169574 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:08:47.170241 master-0 kubenswrapper[28120]: I0220 15:08:47.169814 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:08:47.170241 master-0 kubenswrapper[28120]: I0220 15:08:47.169834 28120 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:08:47.171979 master-0 kubenswrapper[28120]: I0220 15:08:47.171145 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:08:47.171979 master-0 kubenswrapper[28120]: I0220 15:08:47.171450 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:08:47.171979 master-0 kubenswrapper[28120]: I0220 15:08:47.171815 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "prometheus-trusted-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:08:47.172799 master-0 kubenswrapper[28120]: I0220 15:08:47.172693 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:47.172799 master-0 kubenswrapper[28120]: I0220 15:08:47.172717 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:47.173583 master-0 kubenswrapper[28120]: I0220 15:08:47.172824 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:47.173583 master-0 kubenswrapper[28120]: I0220 15:08:47.172980 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). 
InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:47.173583 master-0 kubenswrapper[28120]: I0220 15:08:47.173382 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-config-out" (OuterVolumeSpecName: "config-out") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:08:47.174034 master-0 kubenswrapper[28120]: I0220 15:08:47.173977 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:47.174193 master-0 kubenswrapper[28120]: I0220 15:08:47.174149 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-kube-api-access-fpdzb" (OuterVolumeSpecName: "kube-api-access-fpdzb") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "kube-api-access-fpdzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:08:47.174272 master-0 kubenswrapper[28120]: I0220 15:08:47.174244 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "secret-kube-rbac-proxy". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:47.174525 master-0 kubenswrapper[28120]: I0220 15:08:47.174487 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:08:47.174667 master-0 kubenswrapper[28120]: I0220 15:08:47.174621 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:47.175347 master-0 kubenswrapper[28120]: I0220 15:08:47.175269 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:08:47.175444 master-0 kubenswrapper[28120]: I0220 15:08:47.175369 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-config" (OuterVolumeSpecName: "config") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:47.228168 master-0 kubenswrapper[28120]: I0220 15:08:47.227880 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-web-config" (OuterVolumeSpecName: "web-config") pod "7f79c358-60c9-4d3e-a830-ce6cab8e39d6" (UID: "7f79c358-60c9-4d3e-a830-ce6cab8e39d6"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:47.229977 master-0 kubenswrapper[28120]: I0220 15:08:47.229651 28120 generic.go:334] "Generic (PLEG): container finished" podID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerID="451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704" exitCode=0 Feb 20 15:08:47.229977 master-0 kubenswrapper[28120]: I0220 15:08:47.229708 28120 generic.go:334] "Generic (PLEG): container finished" podID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerID="ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33" exitCode=0 Feb 20 15:08:47.229977 master-0 kubenswrapper[28120]: I0220 15:08:47.229735 28120 generic.go:334] "Generic (PLEG): container finished" podID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerID="62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5" exitCode=0 Feb 20 15:08:47.229977 master-0 kubenswrapper[28120]: I0220 15:08:47.229762 28120 generic.go:334] "Generic (PLEG): container finished" podID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerID="c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e" exitCode=0 Feb 20 15:08:47.229977 master-0 kubenswrapper[28120]: I0220 15:08:47.229794 28120 generic.go:334] "Generic (PLEG): container finished" podID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerID="cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8" exitCode=0 Feb 20 15:08:47.229977 master-0 kubenswrapper[28120]: I0220 15:08:47.229810 28120 generic.go:334] "Generic (PLEG): container finished" 
podID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerID="0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d" exitCode=0 Feb 20 15:08:47.229977 master-0 kubenswrapper[28120]: I0220 15:08:47.229848 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerDied","Data":"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"} Feb 20 15:08:47.229977 master-0 kubenswrapper[28120]: I0220 15:08:47.229896 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerDied","Data":"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"} Feb 20 15:08:47.229977 master-0 kubenswrapper[28120]: I0220 15:08:47.229954 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerDied","Data":"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"} Feb 20 15:08:47.229977 master-0 kubenswrapper[28120]: I0220 15:08:47.229986 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerDied","Data":"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"} Feb 20 15:08:47.230798 master-0 kubenswrapper[28120]: I0220 15:08:47.230022 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerDied","Data":"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"} Feb 20 15:08:47.230798 master-0 kubenswrapper[28120]: I0220 15:08:47.230046 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerDied","Data":"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"} Feb 20 15:08:47.230798 master-0 kubenswrapper[28120]: I0220 15:08:47.230067 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7f79c358-60c9-4d3e-a830-ce6cab8e39d6","Type":"ContainerDied","Data":"35289c6cafcf1fc5af2eb07d3f95df14f1ae9ed75224929c6d3858e4ed0926ad"} Feb 20 15:08:47.230798 master-0 kubenswrapper[28120]: I0220 15:08:47.230098 28120 scope.go:117] "RemoveContainer" containerID="451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704" Feb 20 15:08:47.230798 master-0 kubenswrapper[28120]: I0220 15:08:47.230342 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.259262 master-0 kubenswrapper[28120]: I0220 15:08:47.259197 28120 scope.go:117] "RemoveContainer" containerID="ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33" Feb 20 15:08:47.272007 master-0 kubenswrapper[28120]: I0220 15:08:47.271947 28120 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272007 master-0 kubenswrapper[28120]: I0220 15:08:47.271997 28120 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272007 master-0 kubenswrapper[28120]: I0220 15:08:47.272015 28120 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-config-out\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 
kubenswrapper[28120]: I0220 15:08:47.272032 28120 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272049 28120 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-grpc-tls\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272061 28120 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272074 28120 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-web-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272090 28120 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272102 28120 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272114 28120 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-tls-assets\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272125 28120 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272136 28120 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272151 28120 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272167 28120 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272179 28120 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272191 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpdzb\" (UniqueName: \"kubernetes.io/projected/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-kube-api-access-fpdzb\") on node 
\"master-0\" DevicePath \"\"" Feb 20 15:08:47.272298 master-0 kubenswrapper[28120]: I0220 15:08:47.272205 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7f79c358-60c9-4d3e-a830-ce6cab8e39d6-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:47.280737 master-0 kubenswrapper[28120]: I0220 15:08:47.280544 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 20 15:08:47.281228 master-0 kubenswrapper[28120]: I0220 15:08:47.281173 28120 scope.go:117] "RemoveContainer" containerID="62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5" Feb 20 15:08:47.286980 master-0 kubenswrapper[28120]: I0220 15:08:47.286915 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 20 15:08:47.304311 master-0 kubenswrapper[28120]: I0220 15:08:47.304271 28120 scope.go:117] "RemoveContainer" containerID="c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e" Feb 20 15:08:47.314768 master-0 kubenswrapper[28120]: I0220 15:08:47.314700 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 20 15:08:47.315063 master-0 kubenswrapper[28120]: E0220 15:08:47.315003 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="thanos-sidecar" Feb 20 15:08:47.315063 master-0 kubenswrapper[28120]: I0220 15:08:47.315017 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="thanos-sidecar" Feb 20 15:08:47.315063 master-0 kubenswrapper[28120]: E0220 15:08:47.315027 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="kube-rbac-proxy-thanos" Feb 20 15:08:47.315063 master-0 kubenswrapper[28120]: I0220 15:08:47.315033 28120 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="kube-rbac-proxy-thanos" Feb 20 15:08:47.315063 master-0 kubenswrapper[28120]: E0220 15:08:47.315048 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" containerName="metrics-server" Feb 20 15:08:47.315063 master-0 kubenswrapper[28120]: I0220 15:08:47.315054 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" containerName="metrics-server" Feb 20 15:08:47.315063 master-0 kubenswrapper[28120]: E0220 15:08:47.315066 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="prometheus" Feb 20 15:08:47.315063 master-0 kubenswrapper[28120]: I0220 15:08:47.315072 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="prometheus" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: E0220 15:08:47.315081 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="kube-rbac-proxy-web" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315088 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="kube-rbac-proxy-web" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: E0220 15:08:47.315106 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="kube-rbac-proxy" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315111 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="kube-rbac-proxy" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: E0220 15:08:47.315127 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerName="console" Feb 20 
15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315133 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerName="console" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: E0220 15:08:47.315142 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="config-reloader" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315147 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="config-reloader" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: E0220 15:08:47.315156 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="init-config-reloader" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315162 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="init-config-reloader" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315273 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd203e0-3dd9-4e9d-81f1-46f60d235e38" containerName="metrics-server" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315292 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe031c3-3ab8-42af-ab24-718d83d7d121" containerName="console" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315305 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="kube-rbac-proxy" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315318 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="kube-rbac-proxy-web" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315325 28120 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="prometheus" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315342 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="config-reloader" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315351 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="thanos-sidecar" Feb 20 15:08:47.315470 master-0 kubenswrapper[28120]: I0220 15:08:47.315358 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" containerName="kube-rbac-proxy-thanos" Feb 20 15:08:47.317862 master-0 kubenswrapper[28120]: I0220 15:08:47.317813 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.321579 master-0 kubenswrapper[28120]: I0220 15:08:47.321487 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 20 15:08:47.326200 master-0 kubenswrapper[28120]: I0220 15:08:47.326145 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 20 15:08:47.327833 master-0 kubenswrapper[28120]: I0220 15:08:47.327790 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 20 15:08:47.329257 master-0 kubenswrapper[28120]: I0220 15:08:47.329234 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 20 15:08:47.329895 master-0 kubenswrapper[28120]: I0220 15:08:47.329809 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 20 15:08:47.330326 master-0 
kubenswrapper[28120]: I0220 15:08:47.330304 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 20 15:08:47.330623 master-0 kubenswrapper[28120]: I0220 15:08:47.330601 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 20 15:08:47.331918 master-0 kubenswrapper[28120]: I0220 15:08:47.328778 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 20 15:08:47.332473 master-0 kubenswrapper[28120]: I0220 15:08:47.332452 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 20 15:08:47.332900 master-0 kubenswrapper[28120]: I0220 15:08:47.332880 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-38nuos4fhcl5a" Feb 20 15:08:47.334137 master-0 kubenswrapper[28120]: I0220 15:08:47.334097 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-5k6h7" Feb 20 15:08:47.337509 master-0 kubenswrapper[28120]: I0220 15:08:47.337473 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 20 15:08:47.338999 master-0 kubenswrapper[28120]: I0220 15:08:47.338948 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 20 15:08:47.340407 master-0 kubenswrapper[28120]: I0220 15:08:47.340247 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 20 15:08:47.340497 master-0 kubenswrapper[28120]: I0220 15:08:47.340457 28120 scope.go:117] "RemoveContainer" containerID="cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8" Feb 20 15:08:47.364549 master-0 kubenswrapper[28120]: 
I0220 15:08:47.364405 28120 scope.go:117] "RemoveContainer" containerID="0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"
Feb 20 15:08:47.381359 master-0 kubenswrapper[28120]: I0220 15:08:47.381319 28120 scope.go:117] "RemoveContainer" containerID="c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"
Feb 20 15:08:47.398377 master-0 kubenswrapper[28120]: I0220 15:08:47.398334 28120 scope.go:117] "RemoveContainer" containerID="451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"
Feb 20 15:08:47.398723 master-0 kubenswrapper[28120]: E0220 15:08:47.398687 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": container with ID starting with 451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704 not found: ID does not exist" containerID="451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"
Feb 20 15:08:47.398794 master-0 kubenswrapper[28120]: I0220 15:08:47.398722 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"} err="failed to get container status \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": rpc error: code = NotFound desc = could not find container \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": container with ID starting with 451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704 not found: ID does not exist"
Feb 20 15:08:47.398794 master-0 kubenswrapper[28120]: I0220 15:08:47.398749 28120 scope.go:117] "RemoveContainer" containerID="ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"
Feb 20 15:08:47.399175 master-0 kubenswrapper[28120]: E0220 15:08:47.399131 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": container with ID starting with ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33 not found: ID does not exist" containerID="ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"
Feb 20 15:08:47.399232 master-0 kubenswrapper[28120]: I0220 15:08:47.399171 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"} err="failed to get container status \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": rpc error: code = NotFound desc = could not find container \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": container with ID starting with ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33 not found: ID does not exist"
Feb 20 15:08:47.399232 master-0 kubenswrapper[28120]: I0220 15:08:47.399194 28120 scope.go:117] "RemoveContainer" containerID="62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"
Feb 20 15:08:47.400376 master-0 kubenswrapper[28120]: E0220 15:08:47.400338 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": container with ID starting with 62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5 not found: ID does not exist" containerID="62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"
Feb 20 15:08:47.400449 master-0 kubenswrapper[28120]: I0220 15:08:47.400367 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"} err="failed to get container status \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": rpc error: code = NotFound desc = could not find container \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": container with ID starting with 62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5 not found: ID does not exist"
Feb 20 15:08:47.400449 master-0 kubenswrapper[28120]: I0220 15:08:47.400390 28120 scope.go:117] "RemoveContainer" containerID="c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"
Feb 20 15:08:47.400895 master-0 kubenswrapper[28120]: E0220 15:08:47.400835 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": container with ID starting with c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e not found: ID does not exist" containerID="c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"
Feb 20 15:08:47.400957 master-0 kubenswrapper[28120]: I0220 15:08:47.400901 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"} err="failed to get container status \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": rpc error: code = NotFound desc = could not find container \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": container with ID starting with c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e not found: ID does not exist"
Feb 20 15:08:47.400957 master-0 kubenswrapper[28120]: I0220 15:08:47.400918 28120 scope.go:117] "RemoveContainer" containerID="cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"
Feb 20 15:08:47.401404 master-0 kubenswrapper[28120]: E0220 15:08:47.401350 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": container with ID starting with cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8 not found: ID does not exist" containerID="cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"
Feb 20 15:08:47.401463 master-0 kubenswrapper[28120]: I0220 15:08:47.401406 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"} err="failed to get container status \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": rpc error: code = NotFound desc = could not find container \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": container with ID starting with cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8 not found: ID does not exist"
Feb 20 15:08:47.401463 master-0 kubenswrapper[28120]: I0220 15:08:47.401434 28120 scope.go:117] "RemoveContainer" containerID="0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"
Feb 20 15:08:47.401828 master-0 kubenswrapper[28120]: E0220 15:08:47.401772 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": container with ID starting with 0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d not found: ID does not exist" containerID="0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"
Feb 20 15:08:47.401894 master-0 kubenswrapper[28120]: I0220 15:08:47.401847 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"} err="failed to get container status \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": rpc error: code = NotFound desc = could not find container \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": container with ID starting with 0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d not found: ID does not exist"
Feb 20 15:08:47.401992 master-0 kubenswrapper[28120]: I0220 15:08:47.401890 28120 scope.go:117] "RemoveContainer" containerID="c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"
Feb 20 15:08:47.402237 master-0 kubenswrapper[28120]: E0220 15:08:47.402197 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": container with ID starting with c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de not found: ID does not exist" containerID="c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"
Feb 20 15:08:47.402281 master-0 kubenswrapper[28120]: I0220 15:08:47.402235 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"} err="failed to get container status \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": rpc error: code = NotFound desc = could not find container \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": container with ID starting with c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de not found: ID does not exist"
Feb 20 15:08:47.402281 master-0 kubenswrapper[28120]: I0220 15:08:47.402256 28120 scope.go:117] "RemoveContainer" containerID="451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"
Feb 20 15:08:47.402848 master-0 kubenswrapper[28120]: I0220 15:08:47.402820 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"} err="failed to get container status \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": rpc error: code = NotFound desc = could not find container \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": container with ID starting with 451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704 not found: ID does not exist"
Feb 20 15:08:47.402848 master-0 kubenswrapper[28120]: I0220 15:08:47.402840 28120 scope.go:117] "RemoveContainer" containerID="ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"
Feb 20 15:08:47.403115 master-0 kubenswrapper[28120]: I0220 15:08:47.403087 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"} err="failed to get container status \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": rpc error: code = NotFound desc = could not find container \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": container with ID starting with ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33 not found: ID does not exist"
Feb 20 15:08:47.403115 master-0 kubenswrapper[28120]: I0220 15:08:47.403105 28120 scope.go:117] "RemoveContainer" containerID="62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"
Feb 20 15:08:47.403400 master-0 kubenswrapper[28120]: I0220 15:08:47.403373 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"} err="failed to get container status \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": rpc error: code = NotFound desc = could not find container \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": container with ID starting with 62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5 not found: ID does not exist"
Feb 20 15:08:47.403400 master-0 kubenswrapper[28120]: I0220 15:08:47.403391 28120 scope.go:117] "RemoveContainer" containerID="c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"
Feb 20 15:08:47.403693 master-0 kubenswrapper[28120]: I0220 15:08:47.403665 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"} err="failed to get container status \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": rpc error: code = NotFound desc = could not find container \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": container with ID starting with c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e not found: ID does not exist"
Feb 20 15:08:47.403693 master-0 kubenswrapper[28120]: I0220 15:08:47.403685 28120 scope.go:117] "RemoveContainer" containerID="cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"
Feb 20 15:08:47.404011 master-0 kubenswrapper[28120]: I0220 15:08:47.403970 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"} err="failed to get container status \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": rpc error: code = NotFound desc = could not find container \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": container with ID starting with cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8 not found: ID does not exist"
Feb 20 15:08:47.404053 master-0 kubenswrapper[28120]: I0220 15:08:47.404008 28120 scope.go:117] "RemoveContainer" containerID="0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"
Feb 20 15:08:47.404393 master-0 kubenswrapper[28120]: I0220 15:08:47.404364 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"} err="failed to get container status \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": rpc error: code = NotFound desc = could not find container \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": container with ID starting with 0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d not found: ID does not exist"
Feb 20 15:08:47.404393 master-0 kubenswrapper[28120]: I0220 15:08:47.404383 28120 scope.go:117] "RemoveContainer" containerID="c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"
Feb 20 15:08:47.404620 master-0 kubenswrapper[28120]: I0220 15:08:47.404593 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"} err="failed to get container status \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": rpc error: code = NotFound desc = could not find container \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": container with ID starting with c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de not found: ID does not exist"
Feb 20 15:08:47.404620 master-0 kubenswrapper[28120]: I0220 15:08:47.404611 28120 scope.go:117] "RemoveContainer" containerID="451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"
Feb 20 15:08:47.404980 master-0 kubenswrapper[28120]: I0220 15:08:47.404948 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"} err="failed to get container status \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": rpc error: code = NotFound desc = could not find container \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": container with ID starting with 451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704 not found: ID does not exist"
Feb 20 15:08:47.405019 master-0 kubenswrapper[28120]: I0220 15:08:47.404981 28120 scope.go:117] "RemoveContainer" containerID="ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"
Feb 20 15:08:47.405376 master-0 kubenswrapper[28120]: I0220 15:08:47.405345 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"} err="failed to get container status \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": rpc error: code = NotFound desc = could not find container \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": container with ID starting with ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33 not found: ID does not exist"
Feb 20 15:08:47.405376 master-0 kubenswrapper[28120]: I0220 15:08:47.405366 28120 scope.go:117] "RemoveContainer" containerID="62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"
Feb 20 15:08:47.405668 master-0 kubenswrapper[28120]: I0220 15:08:47.405630 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"} err="failed to get container status \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": rpc error: code = NotFound desc = could not find container \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": container with ID starting with 62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5 not found: ID does not exist"
Feb 20 15:08:47.405702 master-0 kubenswrapper[28120]: I0220 15:08:47.405665 28120 scope.go:117] "RemoveContainer" containerID="c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"
Feb 20 15:08:47.406032 master-0 kubenswrapper[28120]: I0220 15:08:47.405989 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"} err="failed to get container status \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": rpc error: code = NotFound desc = could not find container \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": container with ID starting with c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e not found: ID does not exist"
Feb 20 15:08:47.406070 master-0 kubenswrapper[28120]: I0220 15:08:47.406028 28120 scope.go:117] "RemoveContainer" containerID="cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"
Feb 20 15:08:47.406336 master-0 kubenswrapper[28120]: I0220 15:08:47.406307 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"} err="failed to get container status \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": rpc error: code = NotFound desc = could not find container \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": container with ID starting with cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8 not found: ID does not exist"
Feb 20 15:08:47.406336 master-0 kubenswrapper[28120]: I0220 15:08:47.406326 28120 scope.go:117] "RemoveContainer" containerID="0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"
Feb 20 15:08:47.406584 master-0 kubenswrapper[28120]: I0220 15:08:47.406555 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"} err="failed to get container status \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": rpc error: code = NotFound desc = could not find container \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": container with ID starting with 0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d not found: ID does not exist"
Feb 20 15:08:47.406584 master-0 kubenswrapper[28120]: I0220 15:08:47.406576 28120 scope.go:117] "RemoveContainer" containerID="c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"
Feb 20 15:08:47.406827 master-0 kubenswrapper[28120]: I0220 15:08:47.406788 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"} err="failed to get container status \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": rpc error: code = NotFound desc = could not find container \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": container with ID starting with c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de not found: ID does not exist"
Feb 20 15:08:47.406860 master-0 kubenswrapper[28120]: I0220 15:08:47.406824 28120 scope.go:117] "RemoveContainer" containerID="451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"
Feb 20 15:08:47.407088 master-0 kubenswrapper[28120]: I0220 15:08:47.407059 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"} err="failed to get container status \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": rpc error: code = NotFound desc = could not find container \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": container with ID starting with 451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704 not found: ID does not exist"
Feb 20 15:08:47.407088 master-0 kubenswrapper[28120]: I0220 15:08:47.407079 28120 scope.go:117] "RemoveContainer" containerID="ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"
Feb 20 15:08:47.407458 master-0 kubenswrapper[28120]: I0220 15:08:47.407395 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"} err="failed to get container status \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": rpc error: code = NotFound desc = could not find container \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": container with ID starting with ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33 not found: ID does not exist"
Feb 20 15:08:47.407493 master-0 kubenswrapper[28120]: I0220 15:08:47.407455 28120 scope.go:117] "RemoveContainer" containerID="62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"
Feb 20 15:08:47.407768 master-0 kubenswrapper[28120]: I0220 15:08:47.407736 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"} err="failed to get container status \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": rpc error: code = NotFound desc = could not find container \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": container with ID starting with 62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5 not found: ID does not exist"
Feb 20 15:08:47.407768 master-0 kubenswrapper[28120]: I0220 15:08:47.407758 28120 scope.go:117] "RemoveContainer" containerID="c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"
Feb 20 15:08:47.407996 master-0 kubenswrapper[28120]: I0220 15:08:47.407969 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"} err="failed to get container status \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": rpc error: code = NotFound desc = could not find container \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": container with ID starting with c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e not found: ID does not exist"
Feb 20 15:08:47.407996 master-0 kubenswrapper[28120]: I0220 15:08:47.407987 28120 scope.go:117] "RemoveContainer" containerID="cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"
Feb 20 15:08:47.408228 master-0 kubenswrapper[28120]: I0220 15:08:47.408190 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"} err="failed to get container status \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": rpc error: code = NotFound desc = could not find container \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": container with ID starting with cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8 not found: ID does not exist"
Feb 20 15:08:47.408228 master-0 kubenswrapper[28120]: I0220 15:08:47.408222 28120 scope.go:117] "RemoveContainer" containerID="0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"
Feb 20 15:08:47.408476 master-0 kubenswrapper[28120]: I0220 15:08:47.408449 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"} err="failed to get container status \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": rpc error: code = NotFound desc = could not find container \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": container with ID starting with 0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d not found: ID does not exist"
Feb 20 15:08:47.408476 master-0 kubenswrapper[28120]: I0220 15:08:47.408466 28120 scope.go:117] "RemoveContainer" containerID="c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"
Feb 20 15:08:47.408696 master-0 kubenswrapper[28120]: I0220 15:08:47.408655 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"} err="failed to get container status \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": rpc error: code = NotFound desc = could not find container \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": container with ID starting with c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de not found: ID does not exist"
Feb 20 15:08:47.408735 master-0 kubenswrapper[28120]: I0220 15:08:47.408692 28120 scope.go:117] "RemoveContainer" containerID="451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"
Feb 20 15:08:47.408910 master-0 kubenswrapper[28120]: I0220 15:08:47.408891 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"} err="failed to get container status \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": rpc error: code = NotFound desc = could not find container \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": container with ID starting with 451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704 not found: ID does not exist"
Feb 20 15:08:47.408910 master-0 kubenswrapper[28120]: I0220 15:08:47.408907 28120 scope.go:117] "RemoveContainer" containerID="ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"
Feb 20 15:08:47.409263 master-0 kubenswrapper[28120]: I0220 15:08:47.409231 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"} err="failed to get container status \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": rpc error: code = NotFound desc = could not find container \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": container with ID starting with ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33 not found: ID does not exist"
Feb 20 15:08:47.409303 master-0 kubenswrapper[28120]: I0220 15:08:47.409275 28120 scope.go:117] "RemoveContainer" containerID="62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"
Feb 20 15:08:47.409632 master-0 kubenswrapper[28120]: I0220 15:08:47.409604 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"} err="failed to get container status \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": rpc error: code = NotFound desc = could not find container \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": container with ID starting with 62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5 not found: ID does not exist"
Feb 20 15:08:47.409632 master-0 kubenswrapper[28120]: I0220 15:08:47.409622 28120 scope.go:117] "RemoveContainer" containerID="c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"
Feb 20 15:08:47.409893 master-0 kubenswrapper[28120]: I0220 15:08:47.409828 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"} err="failed to get container status \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": rpc error: code = NotFound desc = could not find container \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": container with ID starting with c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e not found: ID does not exist"
Feb 20 15:08:47.409893 master-0 kubenswrapper[28120]: I0220 15:08:47.409889 28120 scope.go:117] "RemoveContainer" containerID="cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"
Feb 20 15:08:47.410119 master-0 kubenswrapper[28120]: I0220 15:08:47.410090 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"} err="failed to get container status \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": rpc error: code = NotFound desc = could not find container \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": container with ID starting with cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8 not found: ID does not exist"
Feb 20 15:08:47.410119 master-0 kubenswrapper[28120]: I0220 15:08:47.410110 28120 scope.go:117] "RemoveContainer" containerID="0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"
Feb 20 15:08:47.410430 master-0 kubenswrapper[28120]: I0220 15:08:47.410397 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"} err="failed to get container status \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": rpc error: code = NotFound desc = could not find container \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": container with ID starting with 0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d not found: ID does not exist"
Feb 20 15:08:47.410430 master-0 kubenswrapper[28120]: I0220 15:08:47.410420 28120 scope.go:117] "RemoveContainer" containerID="c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"
Feb 20 15:08:47.410692 master-0 kubenswrapper[28120]: I0220 15:08:47.410660 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"} err="failed to get container status \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": rpc error: code = NotFound desc = could not find container \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": container with ID starting with c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de not found: ID does not exist"
Feb 20 15:08:47.410692 master-0 kubenswrapper[28120]: I0220 15:08:47.410683 28120 scope.go:117] "RemoveContainer" containerID="451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"
Feb 20 15:08:47.410983 master-0 kubenswrapper[28120]: I0220 15:08:47.410951 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704"} err="failed to get container status \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": rpc error: code = NotFound desc = could not find container \"451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704\": container with ID starting with 451b95e5c2728ea1f9c86d69a6e4803fbd2355aa611b89e3e76490180ee6b704 not found: ID does not exist"
Feb 20 15:08:47.410983 master-0 kubenswrapper[28120]: I0220 15:08:47.410976 28120 scope.go:117] "RemoveContainer" containerID="ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"
Feb 20 15:08:47.411282 master-0 kubenswrapper[28120]: I0220 15:08:47.411252 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33"} err="failed to get container status \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": rpc error: code = NotFound desc = could not find container \"ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33\": container with ID starting with ba65c0883dce15444ac8e126700c2a1bfb351791dfc23374e45837ca4fa8ea33 not found: ID does not exist"
Feb 20 15:08:47.411282 master-0 kubenswrapper[28120]: I0220 15:08:47.411272 28120 scope.go:117] "RemoveContainer" containerID="62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"
Feb 20 15:08:47.411650 master-0 kubenswrapper[28120]: I0220 15:08:47.411609 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5"} err="failed to get container status \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": rpc error: code = NotFound desc = could not find container \"62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5\": container with ID starting with 62f0db54a35160985c782303d9d29e4b74bf7b7fa0a9334b7417df3b3e4e6ec5 not found: ID does not exist"
Feb 20 15:08:47.411688 master-0 kubenswrapper[28120]: I0220 15:08:47.411646 28120 scope.go:117] "RemoveContainer" containerID="c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"
Feb 20 15:08:47.411977 master-0 kubenswrapper[28120]: I0220 15:08:47.411951 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e"} err="failed to get container status \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": rpc error: code = NotFound desc = could not find container \"c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e\": container with ID starting with c491df11337cd1ba9a886ffad72ba8eed0a98ff9cacecb59f7818be12910898e not found: ID does not exist"
Feb 20 15:08:47.411977 master-0 kubenswrapper[28120]: I0220 15:08:47.411970 28120 scope.go:117] "RemoveContainer" containerID="cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"
Feb 20 15:08:47.412237 master-0 kubenswrapper[28120]: I0220 15:08:47.412209 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8"} err="failed to get container status \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": rpc error: code = NotFound desc = could not find container \"cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8\": container with ID starting with cb7d5aefe3c8c32bbfb2a0839642425b0d5335c4e69251ff8cdbdf1cd04c77a8 not found: ID does not exist"
Feb 20 15:08:47.412237 master-0 kubenswrapper[28120]: I0220 15:08:47.412227 28120 scope.go:117] "RemoveContainer" containerID="0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"
Feb 20 15:08:47.412459 master-0 kubenswrapper[28120]: I0220 15:08:47.412430 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d"} err="failed to get container status \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": rpc error: code = NotFound desc = could not find container \"0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d\": container with ID starting with 0500e89a7ad5697db492fded4c2300d9093b6d94adb83dd382dcfee488dcef8d not found: ID does not exist"
Feb 20 15:08:47.412459 master-0 kubenswrapper[28120]: I0220 15:08:47.412450 28120 scope.go:117] "RemoveContainer" containerID="c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"
Feb 20 15:08:47.412680 master-0 kubenswrapper[28120]: I0220 15:08:47.412652 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de"} err="failed to get container status \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": rpc error: code = NotFound desc = could not find container \"c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de\": container with ID starting with c27474e4d4506155f7dc02c13924d2e96827dd2c5dc97d9394d86e372fc816de not found: ID does not exist"
Feb 20 15:08:47.475224 master-0 kubenswrapper[28120]: I0220 15:08:47.475111 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 20 15:08:47.475442 master-0 kubenswrapper[28120]: I0220 15:08:47.475270 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 20 15:08:47.475442 master-0 kubenswrapper[28120]: I0220 15:08:47.475326 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 20 15:08:47.475442 master-0 kubenswrapper[28120]: I0220 15:08:47.475362 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 20 15:08:47.475442 master-0 kubenswrapper[28120]: I0220 15:08:47.475425 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-config-out\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 20 15:08:47.475722 master-0 kubenswrapper[28120]: I0220 15:08:47.475461 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 20 15:08:47.475722 master-0 kubenswrapper[28120]: I0220 15:08:47.475504 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-config\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 20 15:08:47.475722 master-0 kubenswrapper[28120]: I0220 15:08:47.475563 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 20 15:08:47.475722 master-0 kubenswrapper[28120]: I0220 15:08:47.475606 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 20 15:08:47.475722 master-0 kubenswrapper[28120]: I0220 15:08:47.475673 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 20 15:08:47.476111 master-0 kubenswrapper[28120]: I0220 15:08:47.475730 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName:
\"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.476111 master-0 kubenswrapper[28120]: I0220 15:08:47.475770 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5b8f\" (UniqueName: \"kubernetes.io/projected/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-kube-api-access-q5b8f\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.476111 master-0 kubenswrapper[28120]: I0220 15:08:47.475804 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.476111 master-0 kubenswrapper[28120]: I0220 15:08:47.475840 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.476111 master-0 kubenswrapper[28120]: I0220 15:08:47.475885 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 
15:08:47.476111 master-0 kubenswrapper[28120]: I0220 15:08:47.475978 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-web-config\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.476473 master-0 kubenswrapper[28120]: I0220 15:08:47.476177 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.476473 master-0 kubenswrapper[28120]: I0220 15:08:47.476262 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.578058 master-0 kubenswrapper[28120]: I0220 15:08:47.577966 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-config-out\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.578058 master-0 kubenswrapper[28120]: I0220 15:08:47.578048 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.578987 master-0 kubenswrapper[28120]: I0220 15:08:47.578344 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-config\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.578987 master-0 kubenswrapper[28120]: I0220 15:08:47.578576 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.578987 master-0 kubenswrapper[28120]: I0220 15:08:47.578661 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.579257 master-0 kubenswrapper[28120]: I0220 15:08:47.579135 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.579466 master-0 kubenswrapper[28120]: I0220 15:08:47.579411 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: 
\"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.579466 master-0 kubenswrapper[28120]: I0220 15:08:47.579428 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.579728 master-0 kubenswrapper[28120]: I0220 15:08:47.579486 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5b8f\" (UniqueName: \"kubernetes.io/projected/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-kube-api-access-q5b8f\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.579728 master-0 kubenswrapper[28120]: I0220 15:08:47.579519 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.579728 master-0 kubenswrapper[28120]: I0220 15:08:47.579550 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.579728 master-0 kubenswrapper[28120]: I0220 15:08:47.579585 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.579728 master-0 kubenswrapper[28120]: I0220 15:08:47.579630 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-web-config\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.581438 master-0 kubenswrapper[28120]: I0220 15:08:47.579857 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.581438 master-0 kubenswrapper[28120]: I0220 15:08:47.579961 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.581438 master-0 kubenswrapper[28120]: I0220 15:08:47.580064 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.581438 master-0 kubenswrapper[28120]: I0220 15:08:47.580427 28120 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.581438 master-0 kubenswrapper[28120]: I0220 15:08:47.580466 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.581438 master-0 kubenswrapper[28120]: I0220 15:08:47.580497 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.582069 master-0 kubenswrapper[28120]: I0220 15:08:47.581738 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.582069 master-0 kubenswrapper[28120]: I0220 15:08:47.581972 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.582491 master-0 kubenswrapper[28120]: I0220 
15:08:47.582405 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.583240 master-0 kubenswrapper[28120]: I0220 15:08:47.583185 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-web-config\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.583553 master-0 kubenswrapper[28120]: I0220 15:08:47.583421 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.583747 master-0 kubenswrapper[28120]: I0220 15:08:47.583710 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.584540 master-0 kubenswrapper[28120]: I0220 15:08:47.584490 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-config\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.586634 master-0 kubenswrapper[28120]: I0220 
15:08:47.586588 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.587070 master-0 kubenswrapper[28120]: I0220 15:08:47.587004 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.587275 master-0 kubenswrapper[28120]: I0220 15:08:47.587211 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.589215 master-0 kubenswrapper[28120]: I0220 15:08:47.588891 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.591914 master-0 kubenswrapper[28120]: I0220 15:08:47.591842 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.593017 
master-0 kubenswrapper[28120]: I0220 15:08:47.592966 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.593142 master-0 kubenswrapper[28120]: I0220 15:08:47.593071 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-config-out\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.598352 master-0 kubenswrapper[28120]: I0220 15:08:47.598290 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.603207 master-0 kubenswrapper[28120]: I0220 15:08:47.603126 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.607846 master-0 kubenswrapper[28120]: I0220 15:08:47.607781 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5b8f\" (UniqueName: \"kubernetes.io/projected/5c384a56-5cf0-4d9e-8289-56a45bdbac3e-kube-api-access-q5b8f\") pod \"prometheus-k8s-0\" (UID: \"5c384a56-5cf0-4d9e-8289-56a45bdbac3e\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:47.648865 master-0 kubenswrapper[28120]: I0220 15:08:47.648464 
28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:48.073072 master-0 kubenswrapper[28120]: I0220 15:08:48.072694 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f79c358-60c9-4d3e-a830-ce6cab8e39d6" path="/var/lib/kubelet/pods/7f79c358-60c9-4d3e-a830-ce6cab8e39d6/volumes" Feb 20 15:08:48.126073 master-0 kubenswrapper[28120]: I0220 15:08:48.125961 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 20 15:08:48.239832 master-0 kubenswrapper[28120]: I0220 15:08:48.239759 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c384a56-5cf0-4d9e-8289-56a45bdbac3e","Type":"ContainerStarted","Data":"756de31c08c67570922e206b3d95e07b638180699e2eb2cb4ff7d17c5e0ef835"} Feb 20 15:08:49.254244 master-0 kubenswrapper[28120]: I0220 15:08:49.254118 28120 generic.go:334] "Generic (PLEG): container finished" podID="5c384a56-5cf0-4d9e-8289-56a45bdbac3e" containerID="be2c6fb7c1a6bfd30d4d023ce9faf625ddc70f9855418e6124b01ae6a9fb7674" exitCode=0 Feb 20 15:08:49.255460 master-0 kubenswrapper[28120]: I0220 15:08:49.254266 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c384a56-5cf0-4d9e-8289-56a45bdbac3e","Type":"ContainerDied","Data":"be2c6fb7c1a6bfd30d4d023ce9faf625ddc70f9855418e6124b01ae6a9fb7674"} Feb 20 15:08:50.271830 master-0 kubenswrapper[28120]: I0220 15:08:50.271754 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c384a56-5cf0-4d9e-8289-56a45bdbac3e","Type":"ContainerStarted","Data":"5625ee9002969d59e75801801ba2abfa4abbf3c5edacb4a92d770c88abd1d13f"} Feb 20 15:08:50.271830 master-0 kubenswrapper[28120]: I0220 15:08:50.271801 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"5c384a56-5cf0-4d9e-8289-56a45bdbac3e","Type":"ContainerStarted","Data":"b0c6fd5259e76f86d57483b92336c143b46d576f0680190afb240818a768145c"} Feb 20 15:08:50.271830 master-0 kubenswrapper[28120]: I0220 15:08:50.271812 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c384a56-5cf0-4d9e-8289-56a45bdbac3e","Type":"ContainerStarted","Data":"877ca53d718f8897a2e0c55ac1f68dfcecc07192f3fe23fc9ef5e6b78fa85eb1"} Feb 20 15:08:50.271830 master-0 kubenswrapper[28120]: I0220 15:08:50.271820 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c384a56-5cf0-4d9e-8289-56a45bdbac3e","Type":"ContainerStarted","Data":"1081c3037a87f3ad087e3b999d9c2b02588650b0ca5d5b198dccdcc851bb0472"} Feb 20 15:08:50.271830 master-0 kubenswrapper[28120]: I0220 15:08:50.271829 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c384a56-5cf0-4d9e-8289-56a45bdbac3e","Type":"ContainerStarted","Data":"7f3cfbc07aba3e002462d939c89aef8f23700de6303198d87074f364be151dbd"} Feb 20 15:08:50.271830 master-0 kubenswrapper[28120]: I0220 15:08:50.271838 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c384a56-5cf0-4d9e-8289-56a45bdbac3e","Type":"ContainerStarted","Data":"aab417abb332cb4cdf2fa12c6e2312a9548cc31b48290039540174bd69a1b856"} Feb 20 15:08:50.331247 master-0 kubenswrapper[28120]: I0220 15:08:50.331013 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.3309849480000002 podStartE2EDuration="3.330984948s" podCreationTimestamp="2026-02-20 15:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:08:50.319239153 +0000 UTC m=+468.580032746" 
watchObservedRunningTime="2026-02-20 15:08:50.330984948 +0000 UTC m=+468.591778551" Feb 20 15:08:52.649745 master-0 kubenswrapper[28120]: I0220 15:08:52.649650 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:08:54.196047 master-0 kubenswrapper[28120]: I0220 15:08:54.195953 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-544f96cb59-lzxgw" podUID="5c2b6b66-eba8-4316-bb3f-beed2b53f173" containerName="console" containerID="cri-o://630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990" gracePeriod=15 Feb 20 15:08:54.804062 master-0 kubenswrapper[28120]: I0220 15:08:54.803988 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-544f96cb59-lzxgw_5c2b6b66-eba8-4316-bb3f-beed2b53f173/console/0.log" Feb 20 15:08:54.804380 master-0 kubenswrapper[28120]: I0220 15:08:54.804097 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-544f96cb59-lzxgw" Feb 20 15:08:54.904416 master-0 kubenswrapper[28120]: I0220 15:08:54.904287 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-config\") pod \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " Feb 20 15:08:54.904731 master-0 kubenswrapper[28120]: I0220 15:08:54.904431 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-oauth-serving-cert\") pod \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " Feb 20 15:08:54.904731 master-0 kubenswrapper[28120]: I0220 15:08:54.904526 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qj5v\" (UniqueName: \"kubernetes.io/projected/5c2b6b66-eba8-4316-bb3f-beed2b53f173-kube-api-access-4qj5v\") pod \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " Feb 20 15:08:54.904731 master-0 kubenswrapper[28120]: I0220 15:08:54.904642 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-serving-cert\") pod \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " Feb 20 15:08:54.904731 master-0 kubenswrapper[28120]: I0220 15:08:54.904708 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-service-ca\") pod \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " Feb 20 15:08:54.905210 master-0 kubenswrapper[28120]: I0220 
15:08:54.904789 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-trusted-ca-bundle\") pod \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " Feb 20 15:08:54.905210 master-0 kubenswrapper[28120]: I0220 15:08:54.904908 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-oauth-config\") pod \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\" (UID: \"5c2b6b66-eba8-4316-bb3f-beed2b53f173\") " Feb 20 15:08:54.905210 master-0 kubenswrapper[28120]: I0220 15:08:54.904965 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5c2b6b66-eba8-4316-bb3f-beed2b53f173" (UID: "5c2b6b66-eba8-4316-bb3f-beed2b53f173"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:08:54.905650 master-0 kubenswrapper[28120]: I0220 15:08:54.905572 28120 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:54.905786 master-0 kubenswrapper[28120]: I0220 15:08:54.905641 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5c2b6b66-eba8-4316-bb3f-beed2b53f173" (UID: "5c2b6b66-eba8-4316-bb3f-beed2b53f173"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:08:54.905904 master-0 kubenswrapper[28120]: I0220 15:08:54.905854 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-service-ca" (OuterVolumeSpecName: "service-ca") pod "5c2b6b66-eba8-4316-bb3f-beed2b53f173" (UID: "5c2b6b66-eba8-4316-bb3f-beed2b53f173"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:08:54.907336 master-0 kubenswrapper[28120]: I0220 15:08:54.907272 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-config" (OuterVolumeSpecName: "console-config") pod "5c2b6b66-eba8-4316-bb3f-beed2b53f173" (UID: "5c2b6b66-eba8-4316-bb3f-beed2b53f173"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:08:54.908963 master-0 kubenswrapper[28120]: I0220 15:08:54.908886 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5c2b6b66-eba8-4316-bb3f-beed2b53f173" (UID: "5c2b6b66-eba8-4316-bb3f-beed2b53f173"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:54.909736 master-0 kubenswrapper[28120]: I0220 15:08:54.909655 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2b6b66-eba8-4316-bb3f-beed2b53f173-kube-api-access-4qj5v" (OuterVolumeSpecName: "kube-api-access-4qj5v") pod "5c2b6b66-eba8-4316-bb3f-beed2b53f173" (UID: "5c2b6b66-eba8-4316-bb3f-beed2b53f173"). InnerVolumeSpecName "kube-api-access-4qj5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:08:54.910142 master-0 kubenswrapper[28120]: I0220 15:08:54.910086 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5c2b6b66-eba8-4316-bb3f-beed2b53f173" (UID: "5c2b6b66-eba8-4316-bb3f-beed2b53f173"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:08:55.006985 master-0 kubenswrapper[28120]: I0220 15:08:55.006934 28120 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:55.006985 master-0 kubenswrapper[28120]: I0220 15:08:55.006968 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qj5v\" (UniqueName: \"kubernetes.io/projected/5c2b6b66-eba8-4316-bb3f-beed2b53f173-kube-api-access-4qj5v\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:55.006985 master-0 kubenswrapper[28120]: I0220 15:08:55.006980 28120 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:55.006985 master-0 kubenswrapper[28120]: I0220 15:08:55.006989 28120 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:55.006985 master-0 kubenswrapper[28120]: I0220 15:08:55.006997 28120 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c2b6b66-eba8-4316-bb3f-beed2b53f173-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 
15:08:55.006985 master-0 kubenswrapper[28120]: I0220 15:08:55.007005 28120 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c2b6b66-eba8-4316-bb3f-beed2b53f173-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:08:55.323205 master-0 kubenswrapper[28120]: I0220 15:08:55.323071 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-544f96cb59-lzxgw_5c2b6b66-eba8-4316-bb3f-beed2b53f173/console/0.log" Feb 20 15:08:55.323205 master-0 kubenswrapper[28120]: I0220 15:08:55.323148 28120 generic.go:334] "Generic (PLEG): container finished" podID="5c2b6b66-eba8-4316-bb3f-beed2b53f173" containerID="630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990" exitCode=2 Feb 20 15:08:55.323205 master-0 kubenswrapper[28120]: I0220 15:08:55.323178 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544f96cb59-lzxgw" event={"ID":"5c2b6b66-eba8-4316-bb3f-beed2b53f173","Type":"ContainerDied","Data":"630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990"} Feb 20 15:08:55.323916 master-0 kubenswrapper[28120]: I0220 15:08:55.323222 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544f96cb59-lzxgw" event={"ID":"5c2b6b66-eba8-4316-bb3f-beed2b53f173","Type":"ContainerDied","Data":"d6d9c20e2d07c266849de452b2e503c1aa0433e6fb5dbd805fefce1a0fadada3"} Feb 20 15:08:55.323916 master-0 kubenswrapper[28120]: I0220 15:08:55.323281 28120 scope.go:117] "RemoveContainer" containerID="630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990" Feb 20 15:08:55.323916 master-0 kubenswrapper[28120]: I0220 15:08:55.323302 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-544f96cb59-lzxgw" Feb 20 15:08:55.354805 master-0 kubenswrapper[28120]: I0220 15:08:55.354753 28120 scope.go:117] "RemoveContainer" containerID="630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990" Feb 20 15:08:55.355544 master-0 kubenswrapper[28120]: E0220 15:08:55.355488 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990\": container with ID starting with 630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990 not found: ID does not exist" containerID="630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990" Feb 20 15:08:55.355619 master-0 kubenswrapper[28120]: I0220 15:08:55.355544 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990"} err="failed to get container status \"630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990\": rpc error: code = NotFound desc = could not find container \"630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990\": container with ID starting with 630ff5e27d347be173d2110677d188651f8072bd313ccb9e664c37c8eeedc990 not found: ID does not exist" Feb 20 15:08:55.379564 master-0 kubenswrapper[28120]: I0220 15:08:55.378505 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-544f96cb59-lzxgw"] Feb 20 15:08:55.387745 master-0 kubenswrapper[28120]: I0220 15:08:55.387659 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-544f96cb59-lzxgw"] Feb 20 15:08:56.073761 master-0 kubenswrapper[28120]: I0220 15:08:56.073648 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c2b6b66-eba8-4316-bb3f-beed2b53f173" path="/var/lib/kubelet/pods/5c2b6b66-eba8-4316-bb3f-beed2b53f173/volumes" Feb 20 
15:08:59.535115 master-0 kubenswrapper[28120]: I0220 15:08:59.535008 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Feb 20 15:08:59.536246 master-0 kubenswrapper[28120]: E0220 15:08:59.535549 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2b6b66-eba8-4316-bb3f-beed2b53f173" containerName="console" Feb 20 15:08:59.536246 master-0 kubenswrapper[28120]: I0220 15:08:59.535584 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2b6b66-eba8-4316-bb3f-beed2b53f173" containerName="console" Feb 20 15:08:59.536246 master-0 kubenswrapper[28120]: I0220 15:08:59.536064 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c2b6b66-eba8-4316-bb3f-beed2b53f173" containerName="console" Feb 20 15:08:59.537076 master-0 kubenswrapper[28120]: I0220 15:08:59.537018 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:08:59.540301 master-0 kubenswrapper[28120]: I0220 15:08:59.540219 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-j2nvb" Feb 20 15:08:59.540839 master-0 kubenswrapper[28120]: I0220 15:08:59.540764 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 20 15:08:59.547713 master-0 kubenswrapper[28120]: I0220 15:08:59.547591 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Feb 20 15:08:59.690333 master-0 kubenswrapper[28120]: I0220 15:08:59.690224 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5687718e-7747-46ea-a16a-e999b29a8ce1-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5687718e-7747-46ea-a16a-e999b29a8ce1\") " 
pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:08:59.690333 master-0 kubenswrapper[28120]: I0220 15:08:59.690328 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5687718e-7747-46ea-a16a-e999b29a8ce1-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5687718e-7747-46ea-a16a-e999b29a8ce1\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:08:59.690834 master-0 kubenswrapper[28120]: I0220 15:08:59.690393 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5687718e-7747-46ea-a16a-e999b29a8ce1-var-lock\") pod \"installer-5-master-0\" (UID: \"5687718e-7747-46ea-a16a-e999b29a8ce1\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:08:59.792741 master-0 kubenswrapper[28120]: I0220 15:08:59.792539 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5687718e-7747-46ea-a16a-e999b29a8ce1-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5687718e-7747-46ea-a16a-e999b29a8ce1\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:08:59.792741 master-0 kubenswrapper[28120]: I0220 15:08:59.792719 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5687718e-7747-46ea-a16a-e999b29a8ce1-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5687718e-7747-46ea-a16a-e999b29a8ce1\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:08:59.793250 master-0 kubenswrapper[28120]: I0220 15:08:59.792962 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5687718e-7747-46ea-a16a-e999b29a8ce1-kubelet-dir\") pod 
\"installer-5-master-0\" (UID: \"5687718e-7747-46ea-a16a-e999b29a8ce1\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:08:59.793250 master-0 kubenswrapper[28120]: I0220 15:08:59.793069 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5687718e-7747-46ea-a16a-e999b29a8ce1-var-lock\") pod \"installer-5-master-0\" (UID: \"5687718e-7747-46ea-a16a-e999b29a8ce1\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:08:59.793517 master-0 kubenswrapper[28120]: I0220 15:08:59.793224 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5687718e-7747-46ea-a16a-e999b29a8ce1-var-lock\") pod \"installer-5-master-0\" (UID: \"5687718e-7747-46ea-a16a-e999b29a8ce1\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:08:59.821312 master-0 kubenswrapper[28120]: I0220 15:08:59.821232 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5687718e-7747-46ea-a16a-e999b29a8ce1-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5687718e-7747-46ea-a16a-e999b29a8ce1\") " pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:08:59.878053 master-0 kubenswrapper[28120]: I0220 15:08:59.877889 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:09:00.211490 master-0 kubenswrapper[28120]: I0220 15:09:00.211435 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Feb 20 15:09:00.327625 master-0 kubenswrapper[28120]: I0220 15:09:00.327410 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-xfrxj"] Feb 20 15:09:00.332523 master-0 kubenswrapper[28120]: I0220 15:09:00.332415 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:00.337549 master-0 kubenswrapper[28120]: I0220 15:09:00.336412 28120 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Feb 20 15:09:00.337549 master-0 kubenswrapper[28120]: I0220 15:09:00.336727 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Feb 20 15:09:00.337549 master-0 kubenswrapper[28120]: I0220 15:09:00.337026 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Feb 20 15:09:00.337549 master-0 kubenswrapper[28120]: I0220 15:09:00.337113 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Feb 20 15:09:00.339111 master-0 kubenswrapper[28120]: I0220 15:09:00.338869 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-xfrxj"] Feb 20 15:09:00.377250 master-0 kubenswrapper[28120]: I0220 15:09:00.377192 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"5687718e-7747-46ea-a16a-e999b29a8ce1","Type":"ContainerStarted","Data":"4b7218d301c035c549573596f82cc036ddee609b09bceb62ef0ac342e1197095"} Feb 20 15:09:00.508897 master-0 kubenswrapper[28120]: 
I0220 15:09:00.508819 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcltd\" (UniqueName: \"kubernetes.io/projected/d1cb8849-9478-4af0-a159-0c003d9ceaed-kube-api-access-fcltd\") pod \"sushy-emulator-78f6d7d749-xfrxj\" (UID: \"d1cb8849-9478-4af0-a159-0c003d9ceaed\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:00.509137 master-0 kubenswrapper[28120]: I0220 15:09:00.509059 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d1cb8849-9478-4af0-a159-0c003d9ceaed-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-xfrxj\" (UID: \"d1cb8849-9478-4af0-a159-0c003d9ceaed\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:00.509137 master-0 kubenswrapper[28120]: I0220 15:09:00.509127 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d1cb8849-9478-4af0-a159-0c003d9ceaed-os-client-config\") pod \"sushy-emulator-78f6d7d749-xfrxj\" (UID: \"d1cb8849-9478-4af0-a159-0c003d9ceaed\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:00.610462 master-0 kubenswrapper[28120]: I0220 15:09:00.610298 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcltd\" (UniqueName: \"kubernetes.io/projected/d1cb8849-9478-4af0-a159-0c003d9ceaed-kube-api-access-fcltd\") pod \"sushy-emulator-78f6d7d749-xfrxj\" (UID: \"d1cb8849-9478-4af0-a159-0c003d9ceaed\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:00.611277 master-0 kubenswrapper[28120]: I0220 15:09:00.610496 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d1cb8849-9478-4af0-a159-0c003d9ceaed-sushy-emulator-config\") pod 
\"sushy-emulator-78f6d7d749-xfrxj\" (UID: \"d1cb8849-9478-4af0-a159-0c003d9ceaed\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:00.611277 master-0 kubenswrapper[28120]: I0220 15:09:00.610555 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d1cb8849-9478-4af0-a159-0c003d9ceaed-os-client-config\") pod \"sushy-emulator-78f6d7d749-xfrxj\" (UID: \"d1cb8849-9478-4af0-a159-0c003d9ceaed\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:00.612820 master-0 kubenswrapper[28120]: I0220 15:09:00.612749 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d1cb8849-9478-4af0-a159-0c003d9ceaed-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-xfrxj\" (UID: \"d1cb8849-9478-4af0-a159-0c003d9ceaed\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:00.616285 master-0 kubenswrapper[28120]: I0220 15:09:00.616228 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d1cb8849-9478-4af0-a159-0c003d9ceaed-os-client-config\") pod \"sushy-emulator-78f6d7d749-xfrxj\" (UID: \"d1cb8849-9478-4af0-a159-0c003d9ceaed\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:00.630341 master-0 kubenswrapper[28120]: I0220 15:09:00.630274 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcltd\" (UniqueName: \"kubernetes.io/projected/d1cb8849-9478-4af0-a159-0c003d9ceaed-kube-api-access-fcltd\") pod \"sushy-emulator-78f6d7d749-xfrxj\" (UID: \"d1cb8849-9478-4af0-a159-0c003d9ceaed\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:00.675971 master-0 kubenswrapper[28120]: I0220 15:09:00.675891 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:01.186596 master-0 kubenswrapper[28120]: I0220 15:09:01.186499 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-xfrxj"] Feb 20 15:09:01.389778 master-0 kubenswrapper[28120]: I0220 15:09:01.389697 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"5687718e-7747-46ea-a16a-e999b29a8ce1","Type":"ContainerStarted","Data":"638706ea24e2439a1d07a2d15a8300b4b7bed91ff8f9049191aaa0d57601ee30"} Feb 20 15:09:01.391659 master-0 kubenswrapper[28120]: I0220 15:09:01.391596 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" event={"ID":"d1cb8849-9478-4af0-a159-0c003d9ceaed","Type":"ContainerStarted","Data":"cef1151ce2478d3c096b8abd6f53a028decaf578aea9229e7e6f14f368f77825"} Feb 20 15:09:01.422414 master-0 kubenswrapper[28120]: I0220 15:09:01.422271 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-0" podStartSLOduration=2.422245742 podStartE2EDuration="2.422245742s" podCreationTimestamp="2026-02-20 15:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:09:01.415685228 +0000 UTC m=+479.676478831" watchObservedRunningTime="2026-02-20 15:09:01.422245742 +0000 UTC m=+479.683039335" Feb 20 15:09:08.454771 master-0 kubenswrapper[28120]: I0220 15:09:08.454703 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" event={"ID":"d1cb8849-9478-4af0-a159-0c003d9ceaed","Type":"ContainerStarted","Data":"bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636"} Feb 20 15:09:08.484653 master-0 kubenswrapper[28120]: I0220 15:09:08.484560 28120 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" podStartSLOduration=2.085342244 podStartE2EDuration="8.484531286s" podCreationTimestamp="2026-02-20 15:09:00 +0000 UTC" firstStartedPulling="2026-02-20 15:09:01.194381721 +0000 UTC m=+479.455175314" lastFinishedPulling="2026-02-20 15:09:07.593570743 +0000 UTC m=+485.854364356" observedRunningTime="2026-02-20 15:09:08.480535676 +0000 UTC m=+486.741329249" watchObservedRunningTime="2026-02-20 15:09:08.484531286 +0000 UTC m=+486.745324889" Feb 20 15:09:10.676910 master-0 kubenswrapper[28120]: I0220 15:09:10.676802 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:10.676910 master-0 kubenswrapper[28120]: I0220 15:09:10.676910 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:10.686585 master-0 kubenswrapper[28120]: I0220 15:09:10.686288 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:11.501619 master-0 kubenswrapper[28120]: I0220 15:09:11.501549 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:09:31.123182 master-0 kubenswrapper[28120]: I0220 15:09:31.123088 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-654d8bd879-d6gbw"] Feb 20 15:09:31.126672 master-0 kubenswrapper[28120]: I0220 15:09:31.124472 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-654d8bd879-d6gbw" Feb 20 15:09:31.142888 master-0 kubenswrapper[28120]: I0220 15:09:31.142817 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-654d8bd879-d6gbw"] Feb 20 15:09:31.276323 master-0 kubenswrapper[28120]: I0220 15:09:31.276217 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhb49\" (UniqueName: \"kubernetes.io/projected/90069bb8-8d60-4776-aa60-110b5b4ffdb3-kube-api-access-zhb49\") pod \"nova-console-poller-654d8bd879-d6gbw\" (UID: \"90069bb8-8d60-4776-aa60-110b5b4ffdb3\") " pod="sushy-emulator/nova-console-poller-654d8bd879-d6gbw" Feb 20 15:09:31.276644 master-0 kubenswrapper[28120]: I0220 15:09:31.276398 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/90069bb8-8d60-4776-aa60-110b5b4ffdb3-os-client-config\") pod \"nova-console-poller-654d8bd879-d6gbw\" (UID: \"90069bb8-8d60-4776-aa60-110b5b4ffdb3\") " pod="sushy-emulator/nova-console-poller-654d8bd879-d6gbw" Feb 20 15:09:31.378845 master-0 kubenswrapper[28120]: I0220 15:09:31.378647 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhb49\" (UniqueName: \"kubernetes.io/projected/90069bb8-8d60-4776-aa60-110b5b4ffdb3-kube-api-access-zhb49\") pod \"nova-console-poller-654d8bd879-d6gbw\" (UID: \"90069bb8-8d60-4776-aa60-110b5b4ffdb3\") " pod="sushy-emulator/nova-console-poller-654d8bd879-d6gbw" Feb 20 15:09:31.378845 master-0 kubenswrapper[28120]: I0220 15:09:31.378795 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/90069bb8-8d60-4776-aa60-110b5b4ffdb3-os-client-config\") pod \"nova-console-poller-654d8bd879-d6gbw\" (UID: \"90069bb8-8d60-4776-aa60-110b5b4ffdb3\") " 
pod="sushy-emulator/nova-console-poller-654d8bd879-d6gbw" Feb 20 15:09:31.385083 master-0 kubenswrapper[28120]: I0220 15:09:31.385020 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/90069bb8-8d60-4776-aa60-110b5b4ffdb3-os-client-config\") pod \"nova-console-poller-654d8bd879-d6gbw\" (UID: \"90069bb8-8d60-4776-aa60-110b5b4ffdb3\") " pod="sushy-emulator/nova-console-poller-654d8bd879-d6gbw" Feb 20 15:09:31.407242 master-0 kubenswrapper[28120]: I0220 15:09:31.407173 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhb49\" (UniqueName: \"kubernetes.io/projected/90069bb8-8d60-4776-aa60-110b5b4ffdb3-kube-api-access-zhb49\") pod \"nova-console-poller-654d8bd879-d6gbw\" (UID: \"90069bb8-8d60-4776-aa60-110b5b4ffdb3\") " pod="sushy-emulator/nova-console-poller-654d8bd879-d6gbw" Feb 20 15:09:31.457133 master-0 kubenswrapper[28120]: I0220 15:09:31.457060 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-654d8bd879-d6gbw" Feb 20 15:09:32.020384 master-0 kubenswrapper[28120]: I0220 15:09:32.020297 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-654d8bd879-d6gbw"] Feb 20 15:09:32.739445 master-0 kubenswrapper[28120]: I0220 15:09:32.739378 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-654d8bd879-d6gbw" event={"ID":"90069bb8-8d60-4776-aa60-110b5b4ffdb3","Type":"ContainerStarted","Data":"e897ed20cc553003acd170da2f6bf3aa981e0a30c0039057dc1caded821fadd0"} Feb 20 15:09:33.630973 master-0 kubenswrapper[28120]: I0220 15:09:33.630886 28120 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 15:09:33.631262 master-0 kubenswrapper[28120]: I0220 15:09:33.631212 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="cluster-policy-controller" containerID="cri-o://7cad8acf7dbd02c3f457405f1eed4616fd7216aef734c1327428ee2be5b7437f" gracePeriod=30 Feb 20 15:09:33.631386 master-0 kubenswrapper[28120]: I0220 15:09:33.631348 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://820e2cac15d8737b74874cfbaa6d9476ab99c3ff31d4202e866c117704183192" gracePeriod=30 Feb 20 15:09:33.631511 master-0 kubenswrapper[28120]: I0220 15:09:33.631408 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager-recovery-controller" 
containerID="cri-o://f8983494495af1cf1e5c78f42107fc4b49d0dc1f28d1662f214add882adc38d5" gracePeriod=30 Feb 20 15:09:33.631715 master-0 kubenswrapper[28120]: I0220 15:09:33.631663 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager" containerID="cri-o://c11585243e474d29c5abcfced9e9d122c303847e4a995025645e667d5a6f2999" gracePeriod=30 Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: I0220 15:09:33.632354 28120 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: E0220 15:09:33.632691 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager" Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: I0220 15:09:33.632704 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager" Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: E0220 15:09:33.632733 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="cluster-policy-controller" Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: I0220 15:09:33.632740 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="cluster-policy-controller" Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: E0220 15:09:33.632751 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager-cert-syncer" Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: I0220 15:09:33.632757 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d9b64313fdfb9864d29171f85c889a" 
containerName="kube-controller-manager-cert-syncer" Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: E0220 15:09:33.632768 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager" Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: I0220 15:09:33.632773 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager" Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: E0220 15:09:33.632787 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager" Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: I0220 15:09:33.632811 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager" Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: E0220 15:09:33.632839 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager-recovery-controller" Feb 20 15:09:33.632908 master-0 kubenswrapper[28120]: I0220 15:09:33.632845 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager-recovery-controller" Feb 20 15:09:33.633742 master-0 kubenswrapper[28120]: I0220 15:09:33.633039 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager" Feb 20 15:09:33.633742 master-0 kubenswrapper[28120]: I0220 15:09:33.633050 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager" Feb 20 15:09:33.633742 master-0 kubenswrapper[28120]: I0220 15:09:33.633074 28120 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="84d9b64313fdfb9864d29171f85c889a" containerName="cluster-policy-controller" Feb 20 15:09:33.633742 master-0 kubenswrapper[28120]: I0220 15:09:33.633082 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager-cert-syncer" Feb 20 15:09:33.633742 master-0 kubenswrapper[28120]: I0220 15:09:33.633127 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager-recovery-controller" Feb 20 15:09:33.633742 master-0 kubenswrapper[28120]: I0220 15:09:33.633440 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d9b64313fdfb9864d29171f85c889a" containerName="kube-controller-manager" Feb 20 15:09:33.752547 master-0 kubenswrapper[28120]: I0220 15:09:33.752512 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_84d9b64313fdfb9864d29171f85c889a/kube-controller-manager/1.log" Feb 20 15:09:33.753550 master-0 kubenswrapper[28120]: I0220 15:09:33.753516 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_84d9b64313fdfb9864d29171f85c889a/kube-controller-manager-cert-syncer/0.log" Feb 20 15:09:33.753878 master-0 kubenswrapper[28120]: I0220 15:09:33.753846 28120 generic.go:334] "Generic (PLEG): container finished" podID="84d9b64313fdfb9864d29171f85c889a" containerID="f8983494495af1cf1e5c78f42107fc4b49d0dc1f28d1662f214add882adc38d5" exitCode=0 Feb 20 15:09:33.817652 master-0 kubenswrapper[28120]: I0220 15:09:33.817607 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1fca24f2ad5af27c041ce543b1e490d0-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1fca24f2ad5af27c041ce543b1e490d0\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:33.817791 master-0 kubenswrapper[28120]: I0220 15:09:33.817744 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1fca24f2ad5af27c041ce543b1e490d0-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1fca24f2ad5af27c041ce543b1e490d0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:33.919702 master-0 kubenswrapper[28120]: I0220 15:09:33.919565 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1fca24f2ad5af27c041ce543b1e490d0-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1fca24f2ad5af27c041ce543b1e490d0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:33.919702 master-0 kubenswrapper[28120]: I0220 15:09:33.919685 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1fca24f2ad5af27c041ce543b1e490d0-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1fca24f2ad5af27c041ce543b1e490d0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:33.919976 master-0 kubenswrapper[28120]: I0220 15:09:33.919737 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1fca24f2ad5af27c041ce543b1e490d0-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1fca24f2ad5af27c041ce543b1e490d0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:33.919976 master-0 kubenswrapper[28120]: I0220 15:09:33.919822 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/1fca24f2ad5af27c041ce543b1e490d0-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1fca24f2ad5af27c041ce543b1e490d0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:34.768558 master-0 kubenswrapper[28120]: I0220 15:09:34.768472 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_84d9b64313fdfb9864d29171f85c889a/kube-controller-manager/1.log" Feb 20 15:09:34.770016 master-0 kubenswrapper[28120]: I0220 15:09:34.769969 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_84d9b64313fdfb9864d29171f85c889a/kube-controller-manager-cert-syncer/0.log" Feb 20 15:09:34.770774 master-0 kubenswrapper[28120]: I0220 15:09:34.770689 28120 generic.go:334] "Generic (PLEG): container finished" podID="84d9b64313fdfb9864d29171f85c889a" containerID="c11585243e474d29c5abcfced9e9d122c303847e4a995025645e667d5a6f2999" exitCode=0 Feb 20 15:09:34.770774 master-0 kubenswrapper[28120]: I0220 15:09:34.770745 28120 generic.go:334] "Generic (PLEG): container finished" podID="84d9b64313fdfb9864d29171f85c889a" containerID="820e2cac15d8737b74874cfbaa6d9476ab99c3ff31d4202e866c117704183192" exitCode=2 Feb 20 15:09:34.770774 master-0 kubenswrapper[28120]: I0220 15:09:34.770764 28120 generic.go:334] "Generic (PLEG): container finished" podID="84d9b64313fdfb9864d29171f85c889a" containerID="7cad8acf7dbd02c3f457405f1eed4616fd7216aef734c1327428ee2be5b7437f" exitCode=0 Feb 20 15:09:34.771154 master-0 kubenswrapper[28120]: I0220 15:09:34.770851 28120 scope.go:117] "RemoveContainer" containerID="39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1" Feb 20 15:09:34.773786 master-0 kubenswrapper[28120]: I0220 15:09:34.773482 28120 generic.go:334] "Generic (PLEG): container finished" podID="5687718e-7747-46ea-a16a-e999b29a8ce1" 
containerID="638706ea24e2439a1d07a2d15a8300b4b7bed91ff8f9049191aaa0d57601ee30" exitCode=0 Feb 20 15:09:34.773786 master-0 kubenswrapper[28120]: I0220 15:09:34.773538 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"5687718e-7747-46ea-a16a-e999b29a8ce1","Type":"ContainerDied","Data":"638706ea24e2439a1d07a2d15a8300b4b7bed91ff8f9049191aaa0d57601ee30"} Feb 20 15:09:35.257060 master-0 kubenswrapper[28120]: I0220 15:09:35.257004 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_84d9b64313fdfb9864d29171f85c889a/kube-controller-manager-cert-syncer/0.log" Feb 20 15:09:35.258984 master-0 kubenswrapper[28120]: I0220 15:09:35.258886 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:35.262525 master-0 kubenswrapper[28120]: I0220 15:09:35.262456 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="84d9b64313fdfb9864d29171f85c889a" podUID="1fca24f2ad5af27c041ce543b1e490d0" Feb 20 15:09:35.446440 master-0 kubenswrapper[28120]: I0220 15:09:35.446267 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/84d9b64313fdfb9864d29171f85c889a-cert-dir\") pod \"84d9b64313fdfb9864d29171f85c889a\" (UID: \"84d9b64313fdfb9864d29171f85c889a\") " Feb 20 15:09:35.446440 master-0 kubenswrapper[28120]: I0220 15:09:35.446394 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84d9b64313fdfb9864d29171f85c889a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "84d9b64313fdfb9864d29171f85c889a" (UID: "84d9b64313fdfb9864d29171f85c889a"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:09:35.446697 master-0 kubenswrapper[28120]: I0220 15:09:35.446416 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/84d9b64313fdfb9864d29171f85c889a-resource-dir\") pod \"84d9b64313fdfb9864d29171f85c889a\" (UID: \"84d9b64313fdfb9864d29171f85c889a\") " Feb 20 15:09:35.446697 master-0 kubenswrapper[28120]: I0220 15:09:35.446583 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84d9b64313fdfb9864d29171f85c889a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "84d9b64313fdfb9864d29171f85c889a" (UID: "84d9b64313fdfb9864d29171f85c889a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:09:35.447539 master-0 kubenswrapper[28120]: I0220 15:09:35.447503 28120 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/84d9b64313fdfb9864d29171f85c889a-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 15:09:35.447621 master-0 kubenswrapper[28120]: I0220 15:09:35.447538 28120 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/84d9b64313fdfb9864d29171f85c889a-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 15:09:35.783630 master-0 kubenswrapper[28120]: I0220 15:09:35.783568 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_84d9b64313fdfb9864d29171f85c889a/kube-controller-manager-cert-syncer/0.log" Feb 20 15:09:35.784168 master-0 kubenswrapper[28120]: I0220 15:09:35.784069 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:35.789091 master-0 kubenswrapper[28120]: I0220 15:09:35.789047 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="84d9b64313fdfb9864d29171f85c889a" podUID="1fca24f2ad5af27c041ce543b1e490d0" Feb 20 15:09:35.802716 master-0 kubenswrapper[28120]: I0220 15:09:35.802658 28120 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="84d9b64313fdfb9864d29171f85c889a" podUID="1fca24f2ad5af27c041ce543b1e490d0" Feb 20 15:09:36.066844 master-0 kubenswrapper[28120]: I0220 15:09:36.066661 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d9b64313fdfb9864d29171f85c889a" path="/var/lib/kubelet/pods/84d9b64313fdfb9864d29171f85c889a/volumes" Feb 20 15:09:36.597640 master-0 kubenswrapper[28120]: I0220 15:09:36.597573 28120 scope.go:117] "RemoveContainer" containerID="c11585243e474d29c5abcfced9e9d122c303847e4a995025645e667d5a6f2999" Feb 20 15:09:36.655262 master-0 kubenswrapper[28120]: I0220 15:09:36.655175 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:09:36.663659 master-0 kubenswrapper[28120]: I0220 15:09:36.663271 28120 scope.go:117] "RemoveContainer" containerID="39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1" Feb 20 15:09:36.663776 master-0 kubenswrapper[28120]: E0220 15:09:36.663642 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1\": container with ID starting with 39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1 not found: ID does not exist" containerID="39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1" Feb 20 15:09:36.663776 master-0 kubenswrapper[28120]: I0220 15:09:36.663699 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1"} err="failed to get container status \"39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1\": rpc error: code = NotFound desc = could not find container \"39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1\": container with ID starting with 39277d8e84273f51dbeefcc1cf459f92a492385ff6b0a6961af7b62f9a72c4a1 not found: ID does not exist" Feb 20 15:09:36.663776 master-0 kubenswrapper[28120]: I0220 15:09:36.663732 28120 scope.go:117] "RemoveContainer" containerID="f8983494495af1cf1e5c78f42107fc4b49d0dc1f28d1662f214add882adc38d5" Feb 20 15:09:36.711434 master-0 kubenswrapper[28120]: I0220 15:09:36.694103 28120 scope.go:117] "RemoveContainer" containerID="820e2cac15d8737b74874cfbaa6d9476ab99c3ff31d4202e866c117704183192" Feb 20 15:09:36.734313 master-0 kubenswrapper[28120]: I0220 15:09:36.733201 28120 scope.go:117] "RemoveContainer" containerID="7cad8acf7dbd02c3f457405f1eed4616fd7216aef734c1327428ee2be5b7437f" Feb 20 15:09:36.768212 master-0 kubenswrapper[28120]: 
I0220 15:09:36.768093 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5687718e-7747-46ea-a16a-e999b29a8ce1-kubelet-dir\") pod \"5687718e-7747-46ea-a16a-e999b29a8ce1\" (UID: \"5687718e-7747-46ea-a16a-e999b29a8ce1\") " Feb 20 15:09:36.768554 master-0 kubenswrapper[28120]: I0220 15:09:36.768256 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5687718e-7747-46ea-a16a-e999b29a8ce1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5687718e-7747-46ea-a16a-e999b29a8ce1" (UID: "5687718e-7747-46ea-a16a-e999b29a8ce1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:09:36.768554 master-0 kubenswrapper[28120]: I0220 15:09:36.768544 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5687718e-7747-46ea-a16a-e999b29a8ce1-kube-api-access\") pod \"5687718e-7747-46ea-a16a-e999b29a8ce1\" (UID: \"5687718e-7747-46ea-a16a-e999b29a8ce1\") " Feb 20 15:09:36.768740 master-0 kubenswrapper[28120]: I0220 15:09:36.768608 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5687718e-7747-46ea-a16a-e999b29a8ce1-var-lock\") pod \"5687718e-7747-46ea-a16a-e999b29a8ce1\" (UID: \"5687718e-7747-46ea-a16a-e999b29a8ce1\") " Feb 20 15:09:36.769330 master-0 kubenswrapper[28120]: I0220 15:09:36.769034 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5687718e-7747-46ea-a16a-e999b29a8ce1-var-lock" (OuterVolumeSpecName: "var-lock") pod "5687718e-7747-46ea-a16a-e999b29a8ce1" (UID: "5687718e-7747-46ea-a16a-e999b29a8ce1"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:09:36.769330 master-0 kubenswrapper[28120]: I0220 15:09:36.769262 28120 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5687718e-7747-46ea-a16a-e999b29a8ce1-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 20 15:09:36.769330 master-0 kubenswrapper[28120]: I0220 15:09:36.769290 28120 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5687718e-7747-46ea-a16a-e999b29a8ce1-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 20 15:09:36.771835 master-0 kubenswrapper[28120]: I0220 15:09:36.771759 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5687718e-7747-46ea-a16a-e999b29a8ce1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5687718e-7747-46ea-a16a-e999b29a8ce1" (UID: "5687718e-7747-46ea-a16a-e999b29a8ce1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:09:36.794071 master-0 kubenswrapper[28120]: I0220 15:09:36.793908 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"5687718e-7747-46ea-a16a-e999b29a8ce1","Type":"ContainerDied","Data":"4b7218d301c035c549573596f82cc036ddee609b09bceb62ef0ac342e1197095"} Feb 20 15:09:36.794071 master-0 kubenswrapper[28120]: I0220 15:09:36.794017 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b7218d301c035c549573596f82cc036ddee609b09bceb62ef0ac342e1197095" Feb 20 15:09:36.794071 master-0 kubenswrapper[28120]: I0220 15:09:36.793953 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Feb 20 15:09:36.871260 master-0 kubenswrapper[28120]: I0220 15:09:36.871185 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5687718e-7747-46ea-a16a-e999b29a8ce1-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 20 15:09:37.812662 master-0 kubenswrapper[28120]: I0220 15:09:37.812575 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-654d8bd879-d6gbw" event={"ID":"90069bb8-8d60-4776-aa60-110b5b4ffdb3","Type":"ContainerStarted","Data":"54e578a42cd6df904a94d7c00eef567883820a8b42b0574478fb3f58fa04a918"} Feb 20 15:09:37.813868 master-0 kubenswrapper[28120]: I0220 15:09:37.813827 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-654d8bd879-d6gbw" event={"ID":"90069bb8-8d60-4776-aa60-110b5b4ffdb3","Type":"ContainerStarted","Data":"1f7db87f5b37d95168821a3e8a24619eaad0982fcd55e8c6a834e108c88fc33a"} Feb 20 15:09:37.844068 master-0 kubenswrapper[28120]: I0220 15:09:37.843959 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-poller-654d8bd879-d6gbw" podStartSLOduration=1.5614579530000001 podStartE2EDuration="6.843899943s" podCreationTimestamp="2026-02-20 15:09:31 +0000 UTC" firstStartedPulling="2026-02-20 15:09:32.027961411 +0000 UTC m=+510.288755014" lastFinishedPulling="2026-02-20 15:09:37.310403401 +0000 UTC m=+515.571197004" observedRunningTime="2026-02-20 15:09:37.835594775 +0000 UTC m=+516.096388358" watchObservedRunningTime="2026-02-20 15:09:37.843899943 +0000 UTC m=+516.104693546" Feb 20 15:09:45.055984 master-0 kubenswrapper[28120]: I0220 15:09:45.055850 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:45.078594 master-0 kubenswrapper[28120]: I0220 15:09:45.078532 28120 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fa7bbe99-f465-4467-b6ac-9fe3904a177b" Feb 20 15:09:45.078594 master-0 kubenswrapper[28120]: I0220 15:09:45.078593 28120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fa7bbe99-f465-4467-b6ac-9fe3904a177b" Feb 20 15:09:45.100323 master-0 kubenswrapper[28120]: I0220 15:09:45.100237 28120 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:45.113359 master-0 kubenswrapper[28120]: I0220 15:09:45.113256 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 15:09:45.118799 master-0 kubenswrapper[28120]: I0220 15:09:45.118508 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:45.126825 master-0 kubenswrapper[28120]: I0220 15:09:45.126725 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 15:09:45.138599 master-0 kubenswrapper[28120]: I0220 15:09:45.138520 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 20 15:09:45.902396 master-0 kubenswrapper[28120]: I0220 15:09:45.902321 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1fca24f2ad5af27c041ce543b1e490d0","Type":"ContainerStarted","Data":"9b88629dd326eea6b4e2896adb5d477f872e4bc1fb393aa29838c2814aac79a2"} Feb 20 15:09:45.902396 master-0 kubenswrapper[28120]: I0220 15:09:45.902396 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1fca24f2ad5af27c041ce543b1e490d0","Type":"ContainerStarted","Data":"73bb302cdde849d8036a183967880b2d993b8c3f2ba4ff4b82f5860f43060add"} Feb 20 15:09:45.902658 master-0 kubenswrapper[28120]: I0220 15:09:45.902419 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1fca24f2ad5af27c041ce543b1e490d0","Type":"ContainerStarted","Data":"6116bea46ded5352e9cb2058200579c5cd3c365763811c199ea5ebad3cc90800"} Feb 20 15:09:46.924649 master-0 kubenswrapper[28120]: I0220 15:09:46.924579 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1fca24f2ad5af27c041ce543b1e490d0","Type":"ContainerStarted","Data":"65d83afcff19f42bedf62a5af4ae30f9c6d5b38e62d8338651c4e241c31c4050"} Feb 20 15:09:46.924649 master-0 kubenswrapper[28120]: I0220 15:09:46.924625 
28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1fca24f2ad5af27c041ce543b1e490d0","Type":"ContainerStarted","Data":"44068b8e2ae9d3b2cb41dbd81df20776911f94f19683eae96ccc34baeb14362b"} Feb 20 15:09:46.955734 master-0 kubenswrapper[28120]: I0220 15:09:46.955668 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=1.955650879 podStartE2EDuration="1.955650879s" podCreationTimestamp="2026-02-20 15:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:09:46.952958462 +0000 UTC m=+525.213752035" watchObservedRunningTime="2026-02-20 15:09:46.955650879 +0000 UTC m=+525.216444452" Feb 20 15:09:47.649219 master-0 kubenswrapper[28120]: I0220 15:09:47.649147 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:09:47.696284 master-0 kubenswrapper[28120]: I0220 15:09:47.696215 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:09:47.986718 master-0 kubenswrapper[28120]: I0220 15:09:47.986571 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 20 15:09:55.119169 master-0 kubenswrapper[28120]: I0220 15:09:55.119043 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:55.119169 master-0 kubenswrapper[28120]: I0220 15:09:55.119140 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:55.119169 master-0 kubenswrapper[28120]: I0220 
15:09:55.119167 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:55.119169 master-0 kubenswrapper[28120]: I0220 15:09:55.119185 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:55.126436 master-0 kubenswrapper[28120]: I0220 15:09:55.126360 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:55.127190 master-0 kubenswrapper[28120]: I0220 15:09:55.127139 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:56.018846 master-0 kubenswrapper[28120]: I0220 15:09:56.018757 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:09:56.019603 master-0 kubenswrapper[28120]: I0220 15:09:56.019535 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 20 15:10:06.777370 master-0 kubenswrapper[28120]: I0220 15:10:06.777277 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt"] Feb 20 15:10:06.778144 master-0 kubenswrapper[28120]: E0220 15:10:06.777795 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5687718e-7747-46ea-a16a-e999b29a8ce1" containerName="installer" Feb 20 15:10:06.778144 master-0 kubenswrapper[28120]: I0220 15:10:06.777820 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5687718e-7747-46ea-a16a-e999b29a8ce1" containerName="installer" Feb 20 15:10:06.778254 master-0 kubenswrapper[28120]: I0220 15:10:06.778202 28120 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5687718e-7747-46ea-a16a-e999b29a8ce1" containerName="installer" Feb 20 15:10:06.779509 master-0 kubenswrapper[28120]: I0220 15:10:06.779471 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" Feb 20 15:10:06.799787 master-0 kubenswrapper[28120]: I0220 15:10:06.799724 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt"] Feb 20 15:10:06.808586 master-0 kubenswrapper[28120]: I0220 15:10:06.808513 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/2c4990f5-d82f-4305-b31d-786b5ba8fa98-os-client-config\") pod \"nova-console-recorder-76bd8689ff-w6gzt\" (UID: \"2c4990f5-d82f-4305-b31d-786b5ba8fa98\") " pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" Feb 20 15:10:06.911679 master-0 kubenswrapper[28120]: I0220 15:10:06.911588 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/2c4990f5-d82f-4305-b31d-786b5ba8fa98-os-client-config\") pod \"nova-console-recorder-76bd8689ff-w6gzt\" (UID: \"2c4990f5-d82f-4305-b31d-786b5ba8fa98\") " pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" Feb 20 15:10:06.911986 master-0 kubenswrapper[28120]: I0220 15:10:06.911712 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfvst\" (UniqueName: \"kubernetes.io/projected/2c4990f5-d82f-4305-b31d-786b5ba8fa98-kube-api-access-zfvst\") pod \"nova-console-recorder-76bd8689ff-w6gzt\" (UID: \"2c4990f5-d82f-4305-b31d-786b5ba8fa98\") " pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" Feb 20 15:10:06.911986 master-0 kubenswrapper[28120]: I0220 15:10:06.911797 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/2c4990f5-d82f-4305-b31d-786b5ba8fa98-nova-console-recordings-pv\") pod \"nova-console-recorder-76bd8689ff-w6gzt\" (UID: \"2c4990f5-d82f-4305-b31d-786b5ba8fa98\") " pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" Feb 20 15:10:06.918242 master-0 kubenswrapper[28120]: I0220 15:10:06.918116 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/2c4990f5-d82f-4305-b31d-786b5ba8fa98-os-client-config\") pod \"nova-console-recorder-76bd8689ff-w6gzt\" (UID: \"2c4990f5-d82f-4305-b31d-786b5ba8fa98\") " pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" Feb 20 15:10:07.013185 master-0 kubenswrapper[28120]: I0220 15:10:07.013076 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfvst\" (UniqueName: \"kubernetes.io/projected/2c4990f5-d82f-4305-b31d-786b5ba8fa98-kube-api-access-zfvst\") pod \"nova-console-recorder-76bd8689ff-w6gzt\" (UID: \"2c4990f5-d82f-4305-b31d-786b5ba8fa98\") " pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" Feb 20 15:10:07.013426 master-0 kubenswrapper[28120]: I0220 15:10:07.013199 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/2c4990f5-d82f-4305-b31d-786b5ba8fa98-nova-console-recordings-pv\") pod \"nova-console-recorder-76bd8689ff-w6gzt\" (UID: \"2c4990f5-d82f-4305-b31d-786b5ba8fa98\") " pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" Feb 20 15:10:07.043696 master-0 kubenswrapper[28120]: I0220 15:10:07.043528 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfvst\" (UniqueName: \"kubernetes.io/projected/2c4990f5-d82f-4305-b31d-786b5ba8fa98-kube-api-access-zfvst\") pod \"nova-console-recorder-76bd8689ff-w6gzt\" (UID: \"2c4990f5-d82f-4305-b31d-786b5ba8fa98\") " 
pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" Feb 20 15:10:07.758942 master-0 kubenswrapper[28120]: I0220 15:10:07.758776 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/2c4990f5-d82f-4305-b31d-786b5ba8fa98-nova-console-recordings-pv\") pod \"nova-console-recorder-76bd8689ff-w6gzt\" (UID: \"2c4990f5-d82f-4305-b31d-786b5ba8fa98\") " pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" Feb 20 15:10:08.004090 master-0 kubenswrapper[28120]: I0220 15:10:08.003993 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" Feb 20 15:10:08.547330 master-0 kubenswrapper[28120]: I0220 15:10:08.547242 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt"] Feb 20 15:10:08.563527 master-0 kubenswrapper[28120]: W0220 15:10:08.563372 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c4990f5_d82f_4305_b31d_786b5ba8fa98.slice/crio-0d46868fb41d9b4d19a6bdc21959004ae859b239038eeb3874b54324e21401d3 WatchSource:0}: Error finding container 0d46868fb41d9b4d19a6bdc21959004ae859b239038eeb3874b54324e21401d3: Status 404 returned error can't find the container with id 0d46868fb41d9b4d19a6bdc21959004ae859b239038eeb3874b54324e21401d3 Feb 20 15:10:09.156382 master-0 kubenswrapper[28120]: I0220 15:10:09.156281 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" event={"ID":"2c4990f5-d82f-4305-b31d-786b5ba8fa98","Type":"ContainerStarted","Data":"0d46868fb41d9b4d19a6bdc21959004ae859b239038eeb3874b54324e21401d3"} Feb 20 15:10:18.272698 master-0 kubenswrapper[28120]: I0220 15:10:18.272623 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" 
event={"ID":"2c4990f5-d82f-4305-b31d-786b5ba8fa98","Type":"ContainerStarted","Data":"9cdb501a1273c1f65adc808c13e44063191af067f40cbc863adef3845a2d7121"} Feb 20 15:10:18.272698 master-0 kubenswrapper[28120]: I0220 15:10:18.272685 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" event={"ID":"2c4990f5-d82f-4305-b31d-786b5ba8fa98","Type":"ContainerStarted","Data":"c66f1db07c72d0713eed2096fba639c9f5b1bf7cbf5aeaf82a44a8467de5c5d7"} Feb 20 15:10:18.301125 master-0 kubenswrapper[28120]: I0220 15:10:18.301004 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-76bd8689ff-w6gzt" podStartSLOduration=3.15611273 podStartE2EDuration="12.300978435s" podCreationTimestamp="2026-02-20 15:10:06 +0000 UTC" firstStartedPulling="2026-02-20 15:10:08.568340657 +0000 UTC m=+546.829134230" lastFinishedPulling="2026-02-20 15:10:17.713206362 +0000 UTC m=+555.973999935" observedRunningTime="2026-02-20 15:10:18.297721854 +0000 UTC m=+556.558515447" watchObservedRunningTime="2026-02-20 15:10:18.300978435 +0000 UTC m=+556.561772038" Feb 20 15:10:43.893032 master-0 kubenswrapper[28120]: I0220 15:10:43.892944 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr"] Feb 20 15:10:43.896055 master-0 kubenswrapper[28120]: I0220 15:10:43.896008 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 15:10:43.898856 master-0 kubenswrapper[28120]: I0220 15:10:43.898816 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-trhr6" Feb 20 15:10:43.906186 master-0 kubenswrapper[28120]: I0220 15:10:43.906096 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr"] Feb 20 15:10:43.983435 master-0 kubenswrapper[28120]: I0220 15:10:43.982603 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbc88\" (UniqueName: \"kubernetes.io/projected/a87a1b41-6bfb-4839-a61d-54f40064a82a-kube-api-access-gbc88\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr\" (UID: \"a87a1b41-6bfb-4839-a61d-54f40064a82a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 15:10:43.983435 master-0 kubenswrapper[28120]: I0220 15:10:43.982895 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a87a1b41-6bfb-4839-a61d-54f40064a82a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr\" (UID: \"a87a1b41-6bfb-4839-a61d-54f40064a82a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 15:10:43.983435 master-0 kubenswrapper[28120]: I0220 15:10:43.983257 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a87a1b41-6bfb-4839-a61d-54f40064a82a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr\" (UID: \"a87a1b41-6bfb-4839-a61d-54f40064a82a\") " 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 15:10:44.084478 master-0 kubenswrapper[28120]: I0220 15:10:44.084422 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a87a1b41-6bfb-4839-a61d-54f40064a82a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr\" (UID: \"a87a1b41-6bfb-4839-a61d-54f40064a82a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 15:10:44.084814 master-0 kubenswrapper[28120]: I0220 15:10:44.084791 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a87a1b41-6bfb-4839-a61d-54f40064a82a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr\" (UID: \"a87a1b41-6bfb-4839-a61d-54f40064a82a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 15:10:44.085043 master-0 kubenswrapper[28120]: I0220 15:10:44.085016 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbc88\" (UniqueName: \"kubernetes.io/projected/a87a1b41-6bfb-4839-a61d-54f40064a82a-kube-api-access-gbc88\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr\" (UID: \"a87a1b41-6bfb-4839-a61d-54f40064a82a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 15:10:44.085465 master-0 kubenswrapper[28120]: I0220 15:10:44.085330 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a87a1b41-6bfb-4839-a61d-54f40064a82a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr\" (UID: \"a87a1b41-6bfb-4839-a61d-54f40064a82a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 
15:10:44.085465 master-0 kubenswrapper[28120]: I0220 15:10:44.085386 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a87a1b41-6bfb-4839-a61d-54f40064a82a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr\" (UID: \"a87a1b41-6bfb-4839-a61d-54f40064a82a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 15:10:44.114432 master-0 kubenswrapper[28120]: I0220 15:10:44.114363 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbc88\" (UniqueName: \"kubernetes.io/projected/a87a1b41-6bfb-4839-a61d-54f40064a82a-kube-api-access-gbc88\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr\" (UID: \"a87a1b41-6bfb-4839-a61d-54f40064a82a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 15:10:44.225381 master-0 kubenswrapper[28120]: I0220 15:10:44.225192 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 15:10:44.774176 master-0 kubenswrapper[28120]: I0220 15:10:44.774099 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr"] Feb 20 15:10:45.537574 master-0 kubenswrapper[28120]: I0220 15:10:45.537512 28120 generic.go:334] "Generic (PLEG): container finished" podID="a87a1b41-6bfb-4839-a61d-54f40064a82a" containerID="50be4fa25dc6e260f31769d1fc94e6a50370a6c6b988ad2c92e8d5304529480c" exitCode=0 Feb 20 15:10:45.538404 master-0 kubenswrapper[28120]: I0220 15:10:45.537632 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" event={"ID":"a87a1b41-6bfb-4839-a61d-54f40064a82a","Type":"ContainerDied","Data":"50be4fa25dc6e260f31769d1fc94e6a50370a6c6b988ad2c92e8d5304529480c"} Feb 20 15:10:45.538578 master-0 kubenswrapper[28120]: I0220 15:10:45.538544 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" event={"ID":"a87a1b41-6bfb-4839-a61d-54f40064a82a","Type":"ContainerStarted","Data":"e35da02c7bc1c3f6b32fe7add59e45a536479e006f96bd66b140e3bcb0c871b9"} Feb 20 15:10:45.540197 master-0 kubenswrapper[28120]: I0220 15:10:45.540155 28120 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 20 15:10:47.573027 master-0 kubenswrapper[28120]: I0220 15:10:47.571436 28120 generic.go:334] "Generic (PLEG): container finished" podID="a87a1b41-6bfb-4839-a61d-54f40064a82a" containerID="a0dca96da7b20ca5e5405fb418d145b93d848def18a60dd044981d984dae08e0" exitCode=0 Feb 20 15:10:47.573027 master-0 kubenswrapper[28120]: I0220 15:10:47.571536 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" event={"ID":"a87a1b41-6bfb-4839-a61d-54f40064a82a","Type":"ContainerDied","Data":"a0dca96da7b20ca5e5405fb418d145b93d848def18a60dd044981d984dae08e0"} Feb 20 15:10:48.597094 master-0 kubenswrapper[28120]: I0220 15:10:48.597011 28120 generic.go:334] "Generic (PLEG): container finished" podID="a87a1b41-6bfb-4839-a61d-54f40064a82a" containerID="6aef0fda3824e7bb8454fae1206d143944327768e5363550d4157d05b3901873" exitCode=0 Feb 20 15:10:48.597094 master-0 kubenswrapper[28120]: I0220 15:10:48.597096 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" event={"ID":"a87a1b41-6bfb-4839-a61d-54f40064a82a","Type":"ContainerDied","Data":"6aef0fda3824e7bb8454fae1206d143944327768e5363550d4157d05b3901873"} Feb 20 15:10:50.057289 master-0 kubenswrapper[28120]: I0220 15:10:50.057226 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 15:10:50.111765 master-0 kubenswrapper[28120]: I0220 15:10:50.111673 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a87a1b41-6bfb-4839-a61d-54f40064a82a-util\") pod \"a87a1b41-6bfb-4839-a61d-54f40064a82a\" (UID: \"a87a1b41-6bfb-4839-a61d-54f40064a82a\") " Feb 20 15:10:50.111765 master-0 kubenswrapper[28120]: I0220 15:10:50.111762 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbc88\" (UniqueName: \"kubernetes.io/projected/a87a1b41-6bfb-4839-a61d-54f40064a82a-kube-api-access-gbc88\") pod \"a87a1b41-6bfb-4839-a61d-54f40064a82a\" (UID: \"a87a1b41-6bfb-4839-a61d-54f40064a82a\") " Feb 20 15:10:50.112180 master-0 kubenswrapper[28120]: I0220 15:10:50.111853 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a87a1b41-6bfb-4839-a61d-54f40064a82a-bundle\") pod \"a87a1b41-6bfb-4839-a61d-54f40064a82a\" (UID: \"a87a1b41-6bfb-4839-a61d-54f40064a82a\") " Feb 20 15:10:50.113027 master-0 kubenswrapper[28120]: I0220 15:10:50.112899 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a87a1b41-6bfb-4839-a61d-54f40064a82a-bundle" (OuterVolumeSpecName: "bundle") pod "a87a1b41-6bfb-4839-a61d-54f40064a82a" (UID: "a87a1b41-6bfb-4839-a61d-54f40064a82a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:10:50.113231 master-0 kubenswrapper[28120]: I0220 15:10:50.113173 28120 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a87a1b41-6bfb-4839-a61d-54f40064a82a-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:10:50.116524 master-0 kubenswrapper[28120]: I0220 15:10:50.116450 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87a1b41-6bfb-4839-a61d-54f40064a82a-kube-api-access-gbc88" (OuterVolumeSpecName: "kube-api-access-gbc88") pod "a87a1b41-6bfb-4839-a61d-54f40064a82a" (UID: "a87a1b41-6bfb-4839-a61d-54f40064a82a"). InnerVolumeSpecName "kube-api-access-gbc88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:10:50.135950 master-0 kubenswrapper[28120]: I0220 15:10:50.135864 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a87a1b41-6bfb-4839-a61d-54f40064a82a-util" (OuterVolumeSpecName: "util") pod "a87a1b41-6bfb-4839-a61d-54f40064a82a" (UID: "a87a1b41-6bfb-4839-a61d-54f40064a82a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:10:50.213862 master-0 kubenswrapper[28120]: I0220 15:10:50.213784 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbc88\" (UniqueName: \"kubernetes.io/projected/a87a1b41-6bfb-4839-a61d-54f40064a82a-kube-api-access-gbc88\") on node \"master-0\" DevicePath \"\"" Feb 20 15:10:50.213862 master-0 kubenswrapper[28120]: I0220 15:10:50.213826 28120 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a87a1b41-6bfb-4839-a61d-54f40064a82a-util\") on node \"master-0\" DevicePath \"\"" Feb 20 15:10:50.622251 master-0 kubenswrapper[28120]: I0220 15:10:50.622181 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" event={"ID":"a87a1b41-6bfb-4839-a61d-54f40064a82a","Type":"ContainerDied","Data":"e35da02c7bc1c3f6b32fe7add59e45a536479e006f96bd66b140e3bcb0c871b9"} Feb 20 15:10:50.622251 master-0 kubenswrapper[28120]: I0220 15:10:50.622250 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35da02c7bc1c3f6b32fe7add59e45a536479e006f96bd66b140e3bcb0c871b9" Feb 20 15:10:50.622662 master-0 kubenswrapper[28120]: I0220 15:10:50.622285 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4t7xcr" Feb 20 15:10:57.007751 master-0 kubenswrapper[28120]: I0220 15:10:57.007680 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-59dd4796d-tgc7t"] Feb 20 15:10:57.008531 master-0 kubenswrapper[28120]: E0220 15:10:57.008038 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87a1b41-6bfb-4839-a61d-54f40064a82a" containerName="extract" Feb 20 15:10:57.008531 master-0 kubenswrapper[28120]: I0220 15:10:57.008054 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87a1b41-6bfb-4839-a61d-54f40064a82a" containerName="extract" Feb 20 15:10:57.008531 master-0 kubenswrapper[28120]: E0220 15:10:57.008091 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87a1b41-6bfb-4839-a61d-54f40064a82a" containerName="pull" Feb 20 15:10:57.008531 master-0 kubenswrapper[28120]: I0220 15:10:57.008099 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87a1b41-6bfb-4839-a61d-54f40064a82a" containerName="pull" Feb 20 15:10:57.008531 master-0 kubenswrapper[28120]: E0220 15:10:57.008113 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87a1b41-6bfb-4839-a61d-54f40064a82a" containerName="util" Feb 20 15:10:57.008531 master-0 kubenswrapper[28120]: I0220 15:10:57.008121 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87a1b41-6bfb-4839-a61d-54f40064a82a" containerName="util" Feb 20 15:10:57.008531 master-0 kubenswrapper[28120]: I0220 15:10:57.008291 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="a87a1b41-6bfb-4839-a61d-54f40064a82a" containerName="extract" Feb 20 15:10:57.008891 master-0 kubenswrapper[28120]: I0220 15:10:57.008838 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.013763 master-0 kubenswrapper[28120]: I0220 15:10:57.013702 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Feb 20 15:10:57.013955 master-0 kubenswrapper[28120]: I0220 15:10:57.013906 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Feb 20 15:10:57.014211 master-0 kubenswrapper[28120]: I0220 15:10:57.014182 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Feb 20 15:10:57.014323 master-0 kubenswrapper[28120]: I0220 15:10:57.014302 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Feb 20 15:10:57.014423 master-0 kubenswrapper[28120]: I0220 15:10:57.014403 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Feb 20 15:10:57.028894 master-0 kubenswrapper[28120]: I0220 15:10:57.028800 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-59dd4796d-tgc7t"] Feb 20 15:10:57.066178 master-0 kubenswrapper[28120]: I0220 15:10:57.065440 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd796\" (UniqueName: \"kubernetes.io/projected/464205d4-c3cf-4841-ba5e-c44e768cbb82-kube-api-access-gd796\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.066178 master-0 kubenswrapper[28120]: I0220 15:10:57.065521 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/464205d4-c3cf-4841-ba5e-c44e768cbb82-socket-dir\") pod \"lvms-operator-59dd4796d-tgc7t\" 
(UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.066178 master-0 kubenswrapper[28120]: I0220 15:10:57.065558 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/464205d4-c3cf-4841-ba5e-c44e768cbb82-webhook-cert\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.066178 master-0 kubenswrapper[28120]: I0220 15:10:57.065614 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/464205d4-c3cf-4841-ba5e-c44e768cbb82-metrics-cert\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.066178 master-0 kubenswrapper[28120]: I0220 15:10:57.065649 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/464205d4-c3cf-4841-ba5e-c44e768cbb82-apiservice-cert\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.166864 master-0 kubenswrapper[28120]: I0220 15:10:57.166794 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/464205d4-c3cf-4841-ba5e-c44e768cbb82-apiservice-cert\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.167102 master-0 kubenswrapper[28120]: I0220 15:10:57.166983 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd796\" (UniqueName: 
\"kubernetes.io/projected/464205d4-c3cf-4841-ba5e-c44e768cbb82-kube-api-access-gd796\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.167102 master-0 kubenswrapper[28120]: I0220 15:10:57.167023 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/464205d4-c3cf-4841-ba5e-c44e768cbb82-socket-dir\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.167102 master-0 kubenswrapper[28120]: I0220 15:10:57.167049 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/464205d4-c3cf-4841-ba5e-c44e768cbb82-webhook-cert\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.167102 master-0 kubenswrapper[28120]: I0220 15:10:57.167085 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/464205d4-c3cf-4841-ba5e-c44e768cbb82-metrics-cert\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.168552 master-0 kubenswrapper[28120]: I0220 15:10:57.168478 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/464205d4-c3cf-4841-ba5e-c44e768cbb82-socket-dir\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.170888 master-0 kubenswrapper[28120]: I0220 15:10:57.170810 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/464205d4-c3cf-4841-ba5e-c44e768cbb82-apiservice-cert\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.171267 master-0 kubenswrapper[28120]: I0220 15:10:57.171209 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/464205d4-c3cf-4841-ba5e-c44e768cbb82-metrics-cert\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.172566 master-0 kubenswrapper[28120]: I0220 15:10:57.172512 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/464205d4-c3cf-4841-ba5e-c44e768cbb82-webhook-cert\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.187631 master-0 kubenswrapper[28120]: I0220 15:10:57.187575 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd796\" (UniqueName: \"kubernetes.io/projected/464205d4-c3cf-4841-ba5e-c44e768cbb82-kube-api-access-gd796\") pod \"lvms-operator-59dd4796d-tgc7t\" (UID: \"464205d4-c3cf-4841-ba5e-c44e768cbb82\") " pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.372099 master-0 kubenswrapper[28120]: I0220 15:10:57.371908 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:10:57.859178 master-0 kubenswrapper[28120]: I0220 15:10:57.859112 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-59dd4796d-tgc7t"] Feb 20 15:10:57.863007 master-0 kubenswrapper[28120]: W0220 15:10:57.862967 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod464205d4_c3cf_4841_ba5e_c44e768cbb82.slice/crio-d3252f370b68c2282c1abaea559c96a2724b4d53cd783e235e4a88eadc262ac2 WatchSource:0}: Error finding container d3252f370b68c2282c1abaea559c96a2724b4d53cd783e235e4a88eadc262ac2: Status 404 returned error can't find the container with id d3252f370b68c2282c1abaea559c96a2724b4d53cd783e235e4a88eadc262ac2 Feb 20 15:10:58.699444 master-0 kubenswrapper[28120]: I0220 15:10:58.699363 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" event={"ID":"464205d4-c3cf-4841-ba5e-c44e768cbb82","Type":"ContainerStarted","Data":"d3252f370b68c2282c1abaea559c96a2724b4d53cd783e235e4a88eadc262ac2"} Feb 20 15:11:03.747873 master-0 kubenswrapper[28120]: I0220 15:11:03.747761 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" event={"ID":"464205d4-c3cf-4841-ba5e-c44e768cbb82","Type":"ContainerStarted","Data":"53c25dafa919f7ae3a15d02582cff320db299757fa4789a1931b08d0deca8624"} Feb 20 15:11:03.748964 master-0 kubenswrapper[28120]: I0220 15:11:03.747987 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:11:03.778223 master-0 kubenswrapper[28120]: I0220 15:11:03.778112 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" podStartSLOduration=2.717312823 podStartE2EDuration="7.778088387s" podCreationTimestamp="2026-02-20 
15:10:56 +0000 UTC" firstStartedPulling="2026-02-20 15:10:57.867233716 +0000 UTC m=+596.128027309" lastFinishedPulling="2026-02-20 15:11:02.92800931 +0000 UTC m=+601.188802873" observedRunningTime="2026-02-20 15:11:03.776163559 +0000 UTC m=+602.036957162" watchObservedRunningTime="2026-02-20 15:11:03.778088387 +0000 UTC m=+602.038881980" Feb 20 15:11:04.763494 master-0 kubenswrapper[28120]: I0220 15:11:04.763414 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-59dd4796d-tgc7t" Feb 20 15:11:08.859196 master-0 kubenswrapper[28120]: I0220 15:11:08.859092 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77"] Feb 20 15:11:08.862909 master-0 kubenswrapper[28120]: I0220 15:11:08.862837 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" Feb 20 15:11:08.868791 master-0 kubenswrapper[28120]: I0220 15:11:08.868701 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-trhr6" Feb 20 15:11:08.880552 master-0 kubenswrapper[28120]: I0220 15:11:08.880398 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77"] Feb 20 15:11:08.967984 master-0 kubenswrapper[28120]: I0220 15:11:08.967889 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea59f886-702d-4b34-b1b2-f3a868e11158-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77\" (UID: \"ea59f886-702d-4b34-b1b2-f3a868e11158\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" Feb 20 15:11:08.968306 master-0 kubenswrapper[28120]: I0220 15:11:08.968001 28120 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvmmt\" (UniqueName: \"kubernetes.io/projected/ea59f886-702d-4b34-b1b2-f3a868e11158-kube-api-access-jvmmt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77\" (UID: \"ea59f886-702d-4b34-b1b2-f3a868e11158\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" Feb 20 15:11:08.968539 master-0 kubenswrapper[28120]: I0220 15:11:08.968439 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea59f886-702d-4b34-b1b2-f3a868e11158-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77\" (UID: \"ea59f886-702d-4b34-b1b2-f3a868e11158\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" Feb 20 15:11:09.071362 master-0 kubenswrapper[28120]: I0220 15:11:09.071268 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea59f886-702d-4b34-b1b2-f3a868e11158-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77\" (UID: \"ea59f886-702d-4b34-b1b2-f3a868e11158\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" Feb 20 15:11:09.071708 master-0 kubenswrapper[28120]: I0220 15:11:09.071548 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvmmt\" (UniqueName: \"kubernetes.io/projected/ea59f886-702d-4b34-b1b2-f3a868e11158-kube-api-access-jvmmt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77\" (UID: \"ea59f886-702d-4b34-b1b2-f3a868e11158\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" Feb 20 15:11:09.071708 master-0 kubenswrapper[28120]: I0220 15:11:09.071659 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea59f886-702d-4b34-b1b2-f3a868e11158-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77\" (UID: \"ea59f886-702d-4b34-b1b2-f3a868e11158\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" Feb 20 15:11:09.072366 master-0 kubenswrapper[28120]: I0220 15:11:09.072274 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea59f886-702d-4b34-b1b2-f3a868e11158-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77\" (UID: \"ea59f886-702d-4b34-b1b2-f3a868e11158\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" Feb 20 15:11:09.072532 master-0 kubenswrapper[28120]: I0220 15:11:09.072412 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea59f886-702d-4b34-b1b2-f3a868e11158-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77\" (UID: \"ea59f886-702d-4b34-b1b2-f3a868e11158\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" Feb 20 15:11:09.103523 master-0 kubenswrapper[28120]: I0220 15:11:09.103450 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvmmt\" (UniqueName: \"kubernetes.io/projected/ea59f886-702d-4b34-b1b2-f3a868e11158-kube-api-access-jvmmt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77\" (UID: \"ea59f886-702d-4b34-b1b2-f3a868e11158\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" Feb 20 15:11:09.194858 master-0 kubenswrapper[28120]: I0220 15:11:09.194635 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" Feb 20 15:11:09.646307 master-0 kubenswrapper[28120]: I0220 15:11:09.646241 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285"] Feb 20 15:11:09.648013 master-0 kubenswrapper[28120]: I0220 15:11:09.647976 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285" Feb 20 15:11:09.661335 master-0 kubenswrapper[28120]: I0220 15:11:09.661277 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285"] Feb 20 15:11:09.730998 master-0 kubenswrapper[28120]: I0220 15:11:09.730941 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77"] Feb 20 15:11:09.732914 master-0 kubenswrapper[28120]: W0220 15:11:09.732867 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea59f886_702d_4b34_b1b2_f3a868e11158.slice/crio-9aa1d084200fcf0d68449d161b286c11f94e3804c1c9b523c1c40cd279b062e5 WatchSource:0}: Error finding container 9aa1d084200fcf0d68449d161b286c11f94e3804c1c9b523c1c40cd279b062e5: Status 404 returned error can't find the container with id 9aa1d084200fcf0d68449d161b286c11f94e3804c1c9b523c1c40cd279b062e5 Feb 20 15:11:09.798085 master-0 kubenswrapper[28120]: I0220 15:11:09.798021 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02375645-1a9f-442f-9a66-0b894d55dd6d-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285\" (UID: \"02375645-1a9f-442f-9a66-0b894d55dd6d\") " 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285" Feb 20 15:11:09.798325 master-0 kubenswrapper[28120]: I0220 15:11:09.798182 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02375645-1a9f-442f-9a66-0b894d55dd6d-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285\" (UID: \"02375645-1a9f-442f-9a66-0b894d55dd6d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285" Feb 20 15:11:09.798780 master-0 kubenswrapper[28120]: I0220 15:11:09.798738 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2k9b\" (UniqueName: \"kubernetes.io/projected/02375645-1a9f-442f-9a66-0b894d55dd6d-kube-api-access-n2k9b\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285\" (UID: \"02375645-1a9f-442f-9a66-0b894d55dd6d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285" Feb 20 15:11:09.799859 master-0 kubenswrapper[28120]: I0220 15:11:09.799812 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" event={"ID":"ea59f886-702d-4b34-b1b2-f3a868e11158","Type":"ContainerStarted","Data":"9aa1d084200fcf0d68449d161b286c11f94e3804c1c9b523c1c40cd279b062e5"} Feb 20 15:11:09.900550 master-0 kubenswrapper[28120]: I0220 15:11:09.900434 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2k9b\" (UniqueName: \"kubernetes.io/projected/02375645-1a9f-442f-9a66-0b894d55dd6d-kube-api-access-n2k9b\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285\" (UID: \"02375645-1a9f-442f-9a66-0b894d55dd6d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285" Feb 20 15:11:09.901020 
master-0 kubenswrapper[28120]: I0220 15:11:09.900583 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02375645-1a9f-442f-9a66-0b894d55dd6d-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285\" (UID: \"02375645-1a9f-442f-9a66-0b894d55dd6d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285"
Feb 20 15:11:09.901020 master-0 kubenswrapper[28120]: I0220 15:11:09.900883 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02375645-1a9f-442f-9a66-0b894d55dd6d-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285\" (UID: \"02375645-1a9f-442f-9a66-0b894d55dd6d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285"
Feb 20 15:11:09.901434 master-0 kubenswrapper[28120]: I0220 15:11:09.901396 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02375645-1a9f-442f-9a66-0b894d55dd6d-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285\" (UID: \"02375645-1a9f-442f-9a66-0b894d55dd6d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285"
Feb 20 15:11:09.901579 master-0 kubenswrapper[28120]: I0220 15:11:09.901543 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02375645-1a9f-442f-9a66-0b894d55dd6d-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285\" (UID: \"02375645-1a9f-442f-9a66-0b894d55dd6d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285"
Feb 20 15:11:09.923203 master-0 kubenswrapper[28120]: I0220 15:11:09.923151 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2k9b\" (UniqueName: \"kubernetes.io/projected/02375645-1a9f-442f-9a66-0b894d55dd6d-kube-api-access-n2k9b\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285\" (UID: \"02375645-1a9f-442f-9a66-0b894d55dd6d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285"
Feb 20 15:11:09.975521 master-0 kubenswrapper[28120]: I0220 15:11:09.975456 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285"
Feb 20 15:11:11.202080 master-0 kubenswrapper[28120]: I0220 15:11:11.194314 28120 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-tlsdt container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 20 15:11:11.202080 master-0 kubenswrapper[28120]: I0220 15:11:11.194435 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-7b65dc9fcb-tlsdt" podUID="5f55b652-bef8-4f50-9d1d-9d0a340c1dea" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 20 15:11:11.256082 master-0 kubenswrapper[28120]: I0220 15:11:11.255987 28120 generic.go:334] "Generic (PLEG): container finished" podID="ea59f886-702d-4b34-b1b2-f3a868e11158" containerID="82caec95b0eb630a774ae3146f28b44725539efe46d25f84d37eca34aaa1b98c" exitCode=0
Feb 20 15:11:11.256082 master-0 kubenswrapper[28120]: I0220 15:11:11.256050 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" event={"ID":"ea59f886-702d-4b34-b1b2-f3a868e11158","Type":"ContainerDied","Data":"82caec95b0eb630a774ae3146f28b44725539efe46d25f84d37eca34aaa1b98c"}
Feb 20 15:11:11.441995 master-0 kubenswrapper[28120]: I0220 15:11:11.441839 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"]
Feb 20 15:11:11.444700 master-0 kubenswrapper[28120]: I0220 15:11:11.444673 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:11.470836 master-0 kubenswrapper[28120]: I0220 15:11:11.470785 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"]
Feb 20 15:11:11.510567 master-0 kubenswrapper[28120]: I0220 15:11:11.510479 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nrdx\" (UniqueName: \"kubernetes.io/projected/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-kube-api-access-4nrdx\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj\" (UID: \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:11.510802 master-0 kubenswrapper[28120]: I0220 15:11:11.510593 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj\" (UID: \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:11.510802 master-0 kubenswrapper[28120]: I0220 15:11:11.510628 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj\" (UID: \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:11.620286 master-0 kubenswrapper[28120]: I0220 15:11:11.613583 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj\" (UID: \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:11.620286 master-0 kubenswrapper[28120]: I0220 15:11:11.614018 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nrdx\" (UniqueName: \"kubernetes.io/projected/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-kube-api-access-4nrdx\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj\" (UID: \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:11.620286 master-0 kubenswrapper[28120]: I0220 15:11:11.614296 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj\" (UID: \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:11.620286 master-0 kubenswrapper[28120]: I0220 15:11:11.614907 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj\" (UID: \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:11.620286 master-0 kubenswrapper[28120]: I0220 15:11:11.614955 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj\" (UID: \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:11.641496 master-0 kubenswrapper[28120]: I0220 15:11:11.641398 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285"]
Feb 20 15:11:11.643451 master-0 kubenswrapper[28120]: I0220 15:11:11.643392 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nrdx\" (UniqueName: \"kubernetes.io/projected/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-kube-api-access-4nrdx\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj\" (UID: \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:11.644508 master-0 kubenswrapper[28120]: W0220 15:11:11.644432 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02375645_1a9f_442f_9a66_0b894d55dd6d.slice/crio-2633eef621c8464d4e817f6e1affb82841d987c89c03cf36bb3723a638e19a45 WatchSource:0}: Error finding container 2633eef621c8464d4e817f6e1affb82841d987c89c03cf36bb3723a638e19a45: Status 404 returned error can't find the container with id 2633eef621c8464d4e817f6e1affb82841d987c89c03cf36bb3723a638e19a45
Feb 20 15:11:11.770050 master-0 kubenswrapper[28120]: I0220 15:11:11.769972 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:12.275969 master-0 kubenswrapper[28120]: I0220 15:11:12.275731 28120 generic.go:334] "Generic (PLEG): container finished" podID="02375645-1a9f-442f-9a66-0b894d55dd6d" containerID="7b445a4f0691416e2fbd90f91d8e11574e1cbe853f338533383bf78282ec4d1c" exitCode=0
Feb 20 15:11:12.275969 master-0 kubenswrapper[28120]: I0220 15:11:12.275866 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285" event={"ID":"02375645-1a9f-442f-9a66-0b894d55dd6d","Type":"ContainerDied","Data":"7b445a4f0691416e2fbd90f91d8e11574e1cbe853f338533383bf78282ec4d1c"}
Feb 20 15:11:12.276596 master-0 kubenswrapper[28120]: I0220 15:11:12.275973 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285" event={"ID":"02375645-1a9f-442f-9a66-0b894d55dd6d","Type":"ContainerStarted","Data":"2633eef621c8464d4e817f6e1affb82841d987c89c03cf36bb3723a638e19a45"}
Feb 20 15:11:12.282061 master-0 kubenswrapper[28120]: I0220 15:11:12.281993 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"]
Feb 20 15:11:12.286499 master-0 kubenswrapper[28120]: W0220 15:11:12.286442 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb27eb343_e4fb_4edf_95f4_8dfe7a7cfd4e.slice/crio-704c6417fbb2e96f984b9becf320b3ebaf372637b2ad5402303d55974c8f6b8a WatchSource:0}: Error finding container 704c6417fbb2e96f984b9becf320b3ebaf372637b2ad5402303d55974c8f6b8a: Status 404 returned error can't find the container with id 704c6417fbb2e96f984b9becf320b3ebaf372637b2ad5402303d55974c8f6b8a
Feb 20 15:11:13.290748 master-0 kubenswrapper[28120]: I0220 15:11:13.290554 28120 generic.go:334] "Generic (PLEG): container finished" podID="b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" containerID="9828a691b593afa902de78fb39d768a9e95f559fd4f52ce1d3af5e2785eb4514" exitCode=0
Feb 20 15:11:13.290748 master-0 kubenswrapper[28120]: I0220 15:11:13.290665 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj" event={"ID":"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e","Type":"ContainerDied","Data":"9828a691b593afa902de78fb39d768a9e95f559fd4f52ce1d3af5e2785eb4514"}
Feb 20 15:11:13.291566 master-0 kubenswrapper[28120]: I0220 15:11:13.290803 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj" event={"ID":"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e","Type":"ContainerStarted","Data":"704c6417fbb2e96f984b9becf320b3ebaf372637b2ad5402303d55974c8f6b8a"}
Feb 20 15:11:14.302353 master-0 kubenswrapper[28120]: I0220 15:11:14.302288 28120 generic.go:334] "Generic (PLEG): container finished" podID="02375645-1a9f-442f-9a66-0b894d55dd6d" containerID="447a2d1e9e5b6a3f79b06e28220fbba18db2e5192c303e714daa80fed75ccf95" exitCode=0
Feb 20 15:11:14.302353 master-0 kubenswrapper[28120]: I0220 15:11:14.302339 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285" event={"ID":"02375645-1a9f-442f-9a66-0b894d55dd6d","Type":"ContainerDied","Data":"447a2d1e9e5b6a3f79b06e28220fbba18db2e5192c303e714daa80fed75ccf95"}
Feb 20 15:11:16.323257 master-0 kubenswrapper[28120]: I0220 15:11:16.323175 28120 generic.go:334] "Generic (PLEG): container finished" podID="02375645-1a9f-442f-9a66-0b894d55dd6d" containerID="a0472cf647ad4e3cdcc2fa5dbd10488d4181674463e3dbc98845075b388d73bf" exitCode=0
Feb 20 15:11:16.324147 master-0 kubenswrapper[28120]: I0220 15:11:16.323291 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285" event={"ID":"02375645-1a9f-442f-9a66-0b894d55dd6d","Type":"ContainerDied","Data":"a0472cf647ad4e3cdcc2fa5dbd10488d4181674463e3dbc98845075b388d73bf"}
Feb 20 15:11:16.326413 master-0 kubenswrapper[28120]: I0220 15:11:16.326351 28120 generic.go:334] "Generic (PLEG): container finished" podID="b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" containerID="2b4a6317a5d3a20b33a60dda8ae3fb235ab0642b6fb495c0761d343d1ffafe50" exitCode=0
Feb 20 15:11:16.327378 master-0 kubenswrapper[28120]: I0220 15:11:16.327311 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj" event={"ID":"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e","Type":"ContainerDied","Data":"2b4a6317a5d3a20b33a60dda8ae3fb235ab0642b6fb495c0761d343d1ffafe50"}
Feb 20 15:11:16.332395 master-0 kubenswrapper[28120]: I0220 15:11:16.332351 28120 generic.go:334] "Generic (PLEG): container finished" podID="ea59f886-702d-4b34-b1b2-f3a868e11158" containerID="88f1b319952a02f4c39ec69665cacffe3b55355337d20c6738ead9a8bd44f6b1" exitCode=0
Feb 20 15:11:16.332628 master-0 kubenswrapper[28120]: I0220 15:11:16.332411 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" event={"ID":"ea59f886-702d-4b34-b1b2-f3a868e11158","Type":"ContainerDied","Data":"88f1b319952a02f4c39ec69665cacffe3b55355337d20c6738ead9a8bd44f6b1"}
Feb 20 15:11:17.346509 master-0 kubenswrapper[28120]: I0220 15:11:17.346440 28120 generic.go:334] "Generic (PLEG): container finished" podID="ea59f886-702d-4b34-b1b2-f3a868e11158" containerID="64bd5e30f7eb31a412d31bfe448c558cbd5aa108a1ab658e4cedd2dc53b573d1" exitCode=0
Feb 20 15:11:17.347340 master-0 kubenswrapper[28120]: I0220 15:11:17.347267 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" event={"ID":"ea59f886-702d-4b34-b1b2-f3a868e11158","Type":"ContainerDied","Data":"64bd5e30f7eb31a412d31bfe448c558cbd5aa108a1ab658e4cedd2dc53b573d1"}
Feb 20 15:11:17.351765 master-0 kubenswrapper[28120]: I0220 15:11:17.351370 28120 generic.go:334] "Generic (PLEG): container finished" podID="b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" containerID="66c33f5831250c106498cc6f7cd0edd6f66aa01e0c6f853a38a9b560083297b9" exitCode=0
Feb 20 15:11:17.351943 master-0 kubenswrapper[28120]: I0220 15:11:17.351780 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj" event={"ID":"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e","Type":"ContainerDied","Data":"66c33f5831250c106498cc6f7cd0edd6f66aa01e0c6f853a38a9b560083297b9"}
Feb 20 15:11:17.850178 master-0 kubenswrapper[28120]: I0220 15:11:17.850126 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285"
Feb 20 15:11:17.939252 master-0 kubenswrapper[28120]: I0220 15:11:17.939140 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02375645-1a9f-442f-9a66-0b894d55dd6d-bundle\") pod \"02375645-1a9f-442f-9a66-0b894d55dd6d\" (UID: \"02375645-1a9f-442f-9a66-0b894d55dd6d\") "
Feb 20 15:11:17.939545 master-0 kubenswrapper[28120]: I0220 15:11:17.939343 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2k9b\" (UniqueName: \"kubernetes.io/projected/02375645-1a9f-442f-9a66-0b894d55dd6d-kube-api-access-n2k9b\") pod \"02375645-1a9f-442f-9a66-0b894d55dd6d\" (UID: \"02375645-1a9f-442f-9a66-0b894d55dd6d\") "
Feb 20 15:11:17.939545 master-0 kubenswrapper[28120]: I0220 15:11:17.939464 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02375645-1a9f-442f-9a66-0b894d55dd6d-util\") pod \"02375645-1a9f-442f-9a66-0b894d55dd6d\" (UID: \"02375645-1a9f-442f-9a66-0b894d55dd6d\") "
Feb 20 15:11:17.941419 master-0 kubenswrapper[28120]: I0220 15:11:17.941335 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02375645-1a9f-442f-9a66-0b894d55dd6d-bundle" (OuterVolumeSpecName: "bundle") pod "02375645-1a9f-442f-9a66-0b894d55dd6d" (UID: "02375645-1a9f-442f-9a66-0b894d55dd6d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:11:17.944093 master-0 kubenswrapper[28120]: I0220 15:11:17.944002 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02375645-1a9f-442f-9a66-0b894d55dd6d-kube-api-access-n2k9b" (OuterVolumeSpecName: "kube-api-access-n2k9b") pod "02375645-1a9f-442f-9a66-0b894d55dd6d" (UID: "02375645-1a9f-442f-9a66-0b894d55dd6d"). InnerVolumeSpecName "kube-api-access-n2k9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:11:17.965212 master-0 kubenswrapper[28120]: I0220 15:11:17.965119 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02375645-1a9f-442f-9a66-0b894d55dd6d-util" (OuterVolumeSpecName: "util") pod "02375645-1a9f-442f-9a66-0b894d55dd6d" (UID: "02375645-1a9f-442f-9a66-0b894d55dd6d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:11:18.042585 master-0 kubenswrapper[28120]: I0220 15:11:18.042392 28120 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02375645-1a9f-442f-9a66-0b894d55dd6d-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:11:18.042585 master-0 kubenswrapper[28120]: I0220 15:11:18.042464 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2k9b\" (UniqueName: \"kubernetes.io/projected/02375645-1a9f-442f-9a66-0b894d55dd6d-kube-api-access-n2k9b\") on node \"master-0\" DevicePath \"\""
Feb 20 15:11:18.042585 master-0 kubenswrapper[28120]: I0220 15:11:18.042484 28120 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02375645-1a9f-442f-9a66-0b894d55dd6d-util\") on node \"master-0\" DevicePath \"\""
Feb 20 15:11:18.367035 master-0 kubenswrapper[28120]: I0220 15:11:18.366822 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285" event={"ID":"02375645-1a9f-442f-9a66-0b894d55dd6d","Type":"ContainerDied","Data":"2633eef621c8464d4e817f6e1affb82841d987c89c03cf36bb3723a638e19a45"}
Feb 20 15:11:18.367035 master-0 kubenswrapper[28120]: I0220 15:11:18.366918 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2633eef621c8464d4e817f6e1affb82841d987c89c03cf36bb3723a638e19a45"
Feb 20 15:11:18.367860 master-0 kubenswrapper[28120]: I0220 15:11:18.367089 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kw285"
Feb 20 15:11:18.909491 master-0 kubenswrapper[28120]: I0220 15:11:18.909407 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:18.924770 master-0 kubenswrapper[28120]: I0220 15:11:18.924696 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77"
Feb 20 15:11:18.960102 master-0 kubenswrapper[28120]: I0220 15:11:18.960005 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea59f886-702d-4b34-b1b2-f3a868e11158-util\") pod \"ea59f886-702d-4b34-b1b2-f3a868e11158\" (UID: \"ea59f886-702d-4b34-b1b2-f3a868e11158\") "
Feb 20 15:11:18.960102 master-0 kubenswrapper[28120]: I0220 15:11:18.960110 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nrdx\" (UniqueName: \"kubernetes.io/projected/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-kube-api-access-4nrdx\") pod \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\" (UID: \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\") "
Feb 20 15:11:18.960529 master-0 kubenswrapper[28120]: I0220 15:11:18.960177 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea59f886-702d-4b34-b1b2-f3a868e11158-bundle\") pod \"ea59f886-702d-4b34-b1b2-f3a868e11158\" (UID: \"ea59f886-702d-4b34-b1b2-f3a868e11158\") "
Feb 20 15:11:18.960529 master-0 kubenswrapper[28120]: I0220 15:11:18.960282 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvmmt\" (UniqueName: \"kubernetes.io/projected/ea59f886-702d-4b34-b1b2-f3a868e11158-kube-api-access-jvmmt\") pod \"ea59f886-702d-4b34-b1b2-f3a868e11158\" (UID: \"ea59f886-702d-4b34-b1b2-f3a868e11158\") "
Feb 20 15:11:18.960529 master-0 kubenswrapper[28120]: I0220 15:11:18.960326 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-util\") pod \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\" (UID: \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\") "
Feb 20 15:11:18.960529 master-0 kubenswrapper[28120]: I0220 15:11:18.960370 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-bundle\") pod \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\" (UID: \"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e\") "
Feb 20 15:11:18.961540 master-0 kubenswrapper[28120]: I0220 15:11:18.961467 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-bundle" (OuterVolumeSpecName: "bundle") pod "b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" (UID: "b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:11:18.962587 master-0 kubenswrapper[28120]: I0220 15:11:18.962476 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea59f886-702d-4b34-b1b2-f3a868e11158-bundle" (OuterVolumeSpecName: "bundle") pod "ea59f886-702d-4b34-b1b2-f3a868e11158" (UID: "ea59f886-702d-4b34-b1b2-f3a868e11158"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:11:18.967675 master-0 kubenswrapper[28120]: I0220 15:11:18.967595 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea59f886-702d-4b34-b1b2-f3a868e11158-kube-api-access-jvmmt" (OuterVolumeSpecName: "kube-api-access-jvmmt") pod "ea59f886-702d-4b34-b1b2-f3a868e11158" (UID: "ea59f886-702d-4b34-b1b2-f3a868e11158"). InnerVolumeSpecName "kube-api-access-jvmmt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:11:18.971025 master-0 kubenswrapper[28120]: I0220 15:11:18.970645 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-kube-api-access-4nrdx" (OuterVolumeSpecName: "kube-api-access-4nrdx") pod "b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" (UID: "b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e"). InnerVolumeSpecName "kube-api-access-4nrdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:11:18.986431 master-0 kubenswrapper[28120]: I0220 15:11:18.986353 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-util" (OuterVolumeSpecName: "util") pod "b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" (UID: "b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:11:18.990454 master-0 kubenswrapper[28120]: I0220 15:11:18.990384 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea59f886-702d-4b34-b1b2-f3a868e11158-util" (OuterVolumeSpecName: "util") pod "ea59f886-702d-4b34-b1b2-f3a868e11158" (UID: "ea59f886-702d-4b34-b1b2-f3a868e11158"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:11:19.062209 master-0 kubenswrapper[28120]: I0220 15:11:19.062144 28120 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea59f886-702d-4b34-b1b2-f3a868e11158-util\") on node \"master-0\" DevicePath \"\""
Feb 20 15:11:19.062209 master-0 kubenswrapper[28120]: I0220 15:11:19.062194 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nrdx\" (UniqueName: \"kubernetes.io/projected/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-kube-api-access-4nrdx\") on node \"master-0\" DevicePath \"\""
Feb 20 15:11:19.062476 master-0 kubenswrapper[28120]: I0220 15:11:19.062287 28120 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea59f886-702d-4b34-b1b2-f3a868e11158-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:11:19.062476 master-0 kubenswrapper[28120]: I0220 15:11:19.062303 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvmmt\" (UniqueName: \"kubernetes.io/projected/ea59f886-702d-4b34-b1b2-f3a868e11158-kube-api-access-jvmmt\") on node \"master-0\" DevicePath \"\""
Feb 20 15:11:19.062476 master-0 kubenswrapper[28120]: I0220 15:11:19.062315 28120 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-util\") on node \"master-0\" DevicePath \"\""
Feb 20 15:11:19.062476 master-0 kubenswrapper[28120]: I0220 15:11:19.062325 28120 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:11:19.381593 master-0 kubenswrapper[28120]: I0220 15:11:19.381398 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77" event={"ID":"ea59f886-702d-4b34-b1b2-f3a868e11158","Type":"ContainerDied","Data":"9aa1d084200fcf0d68449d161b286c11f94e3804c1c9b523c1c40cd279b062e5"}
Feb 20 15:11:19.381593 master-0 kubenswrapper[28120]: I0220 15:11:19.381495 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5shw77"
Feb 20 15:11:19.382744 master-0 kubenswrapper[28120]: I0220 15:11:19.381550 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aa1d084200fcf0d68449d161b286c11f94e3804c1c9b523c1c40cd279b062e5"
Feb 20 15:11:19.385590 master-0 kubenswrapper[28120]: I0220 15:11:19.385540 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj" event={"ID":"b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e","Type":"ContainerDied","Data":"704c6417fbb2e96f984b9becf320b3ebaf372637b2ad5402303d55974c8f6b8a"}
Feb 20 15:11:19.385590 master-0 kubenswrapper[28120]: I0220 15:11:19.385573 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="704c6417fbb2e96f984b9becf320b3ebaf372637b2ad5402303d55974c8f6b8a"
Feb 20 15:11:19.385788 master-0 kubenswrapper[28120]: I0220 15:11:19.385630 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaxw5pj"
Feb 20 15:11:19.646025 master-0 kubenswrapper[28120]: I0220 15:11:19.645814 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"]
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: E0220 15:11:19.646585 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02375645-1a9f-442f-9a66-0b894d55dd6d" containerName="pull"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: I0220 15:11:19.646669 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="02375645-1a9f-442f-9a66-0b894d55dd6d" containerName="pull"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: E0220 15:11:19.646759 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea59f886-702d-4b34-b1b2-f3a868e11158" containerName="util"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: I0220 15:11:19.646775 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea59f886-702d-4b34-b1b2-f3a868e11158" containerName="util"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: E0220 15:11:19.646852 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea59f886-702d-4b34-b1b2-f3a868e11158" containerName="extract"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: I0220 15:11:19.646869 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea59f886-702d-4b34-b1b2-f3a868e11158" containerName="extract"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: E0220 15:11:19.646897 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" containerName="util"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: I0220 15:11:19.646983 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" containerName="util"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: E0220 15:11:19.647006 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02375645-1a9f-442f-9a66-0b894d55dd6d" containerName="extract"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: I0220 15:11:19.647066 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="02375645-1a9f-442f-9a66-0b894d55dd6d" containerName="extract"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: E0220 15:11:19.647096 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" containerName="extract"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: I0220 15:11:19.647168 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" containerName="extract"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: E0220 15:11:19.647201 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea59f886-702d-4b34-b1b2-f3a868e11158" containerName="pull"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: I0220 15:11:19.647213 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea59f886-702d-4b34-b1b2-f3a868e11158" containerName="pull"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: E0220 15:11:19.647241 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" containerName="pull"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: I0220 15:11:19.647253 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" containerName="pull"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: E0220 15:11:19.647322 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02375645-1a9f-442f-9a66-0b894d55dd6d" containerName="util"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: I0220 15:11:19.647337 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="02375645-1a9f-442f-9a66-0b894d55dd6d" containerName="util"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: I0220 15:11:19.647617 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27eb343-e4fb-4edf-95f4-8dfe7a7cfd4e" containerName="extract"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: I0220 15:11:19.647652 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea59f886-702d-4b34-b1b2-f3a868e11158" containerName="extract"
Feb 20 15:11:19.648793 master-0 kubenswrapper[28120]: I0220 15:11:19.647670 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="02375645-1a9f-442f-9a66-0b894d55dd6d" containerName="extract"
Feb 20 15:11:19.650726 master-0 kubenswrapper[28120]: I0220 15:11:19.650662 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"
Feb 20 15:11:19.653113 master-0 kubenswrapper[28120]: I0220 15:11:19.653057 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-trhr6"
Feb 20 15:11:19.663678 master-0 kubenswrapper[28120]: I0220 15:11:19.663062 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"]
Feb 20 15:11:19.671674 master-0 kubenswrapper[28120]: I0220 15:11:19.671220 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b82eb676-3ee7-434e-8eb3-116090ba1d8a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt\" (UID: \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"
Feb 20 15:11:19.671674 master-0 kubenswrapper[28120]: I0220 15:11:19.671396 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6f8j\" (UniqueName: \"kubernetes.io/projected/b82eb676-3ee7-434e-8eb3-116090ba1d8a-kube-api-access-t6f8j\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt\" (UID: \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"
Feb 20 15:11:19.671674 master-0 kubenswrapper[28120]: I0220 15:11:19.671541 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b82eb676-3ee7-434e-8eb3-116090ba1d8a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt\" (UID: \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"
Feb 20 15:11:19.773530 master-0 kubenswrapper[28120]: I0220 15:11:19.773447 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b82eb676-3ee7-434e-8eb3-116090ba1d8a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt\" (UID: \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"
Feb 20 15:11:19.773960 master-0 kubenswrapper[28120]: I0220 15:11:19.773752 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b82eb676-3ee7-434e-8eb3-116090ba1d8a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt\" (UID: \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"
Feb 20 15:11:19.774211 master-0 kubenswrapper[28120]: I0220 15:11:19.774159 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6f8j\" (UniqueName: \"kubernetes.io/projected/b82eb676-3ee7-434e-8eb3-116090ba1d8a-kube-api-access-t6f8j\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt\" (UID: \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"
Feb 20 15:11:19.774211 master-0 kubenswrapper[28120]: I0220 15:11:19.774187 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b82eb676-3ee7-434e-8eb3-116090ba1d8a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt\" (UID: \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"
Feb 20 15:11:19.774422 master-0 kubenswrapper[28120]: I0220 15:11:19.774375 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b82eb676-3ee7-434e-8eb3-116090ba1d8a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt\" (UID: \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"
Feb 20 15:11:19.794028 master-0 kubenswrapper[28120]: I0220 15:11:19.793737 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6f8j\" (UniqueName: \"kubernetes.io/projected/b82eb676-3ee7-434e-8eb3-116090ba1d8a-kube-api-access-t6f8j\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt\" (UID: \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"
Feb 20 15:11:19.973114 master-0 kubenswrapper[28120]: I0220 15:11:19.972884 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"
Feb 20 15:11:20.478461 master-0 kubenswrapper[28120]: W0220 15:11:20.478238 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb82eb676_3ee7_434e_8eb3_116090ba1d8a.slice/crio-ef9833bedeca9a81dd2e42f48459b874b9f927ee2b5a5231492aec0b268ff49d WatchSource:0}: Error finding container ef9833bedeca9a81dd2e42f48459b874b9f927ee2b5a5231492aec0b268ff49d: Status 404 returned error can't find the container with id ef9833bedeca9a81dd2e42f48459b874b9f927ee2b5a5231492aec0b268ff49d
Feb 20 15:11:20.480435 master-0 kubenswrapper[28120]: I0220 15:11:20.480369 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt"]
Feb 20 15:11:21.409321 master-0 kubenswrapper[28120]: I0220 15:11:21.409205 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt" event={"ID":"b82eb676-3ee7-434e-8eb3-116090ba1d8a","Type":"ContainerDied","Data":"fc5926fd45586bf242d5680b7e8f0197d419d73b13a8f4678b666b3bd01d9484"}
Feb 20 15:11:21.409665 master-0 kubenswrapper[28120]: I0220 15:11:21.409013 28120 generic.go:334] "Generic (PLEG): container finished" podID="b82eb676-3ee7-434e-8eb3-116090ba1d8a" containerID="fc5926fd45586bf242d5680b7e8f0197d419d73b13a8f4678b666b3bd01d9484" exitCode=0
Feb 20 15:11:21.409665 master-0 kubenswrapper[28120]: I0220 15:11:21.409489 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt" event={"ID":"b82eb676-3ee7-434e-8eb3-116090ba1d8a","Type":"ContainerStarted","Data":"ef9833bedeca9a81dd2e42f48459b874b9f927ee2b5a5231492aec0b268ff49d"}
Feb 20 15:11:23.432630 master-0 kubenswrapper[28120]: I0220 15:11:23.432585 28120 
generic.go:334] "Generic (PLEG): container finished" podID="b82eb676-3ee7-434e-8eb3-116090ba1d8a" containerID="5cd84b8e1b906a30d8766056860c5c38c9250f2a35bd6d48bd341b12add0399f" exitCode=0 Feb 20 15:11:23.433157 master-0 kubenswrapper[28120]: I0220 15:11:23.432736 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt" event={"ID":"b82eb676-3ee7-434e-8eb3-116090ba1d8a","Type":"ContainerDied","Data":"5cd84b8e1b906a30d8766056860c5c38c9250f2a35bd6d48bd341b12add0399f"} Feb 20 15:11:24.445803 master-0 kubenswrapper[28120]: I0220 15:11:24.445740 28120 generic.go:334] "Generic (PLEG): container finished" podID="b82eb676-3ee7-434e-8eb3-116090ba1d8a" containerID="d508f8846734c8e6b84cdbc285b92396048b567afe721f67906a8e1b60c9ae08" exitCode=0 Feb 20 15:11:24.445803 master-0 kubenswrapper[28120]: I0220 15:11:24.445802 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt" event={"ID":"b82eb676-3ee7-434e-8eb3-116090ba1d8a","Type":"ContainerDied","Data":"d508f8846734c8e6b84cdbc285b92396048b567afe721f67906a8e1b60c9ae08"} Feb 20 15:11:25.853086 master-0 kubenswrapper[28120]: I0220 15:11:25.853045 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt" Feb 20 15:11:25.884423 master-0 kubenswrapper[28120]: I0220 15:11:25.884377 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b82eb676-3ee7-434e-8eb3-116090ba1d8a-util\") pod \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\" (UID: \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\") " Feb 20 15:11:25.884733 master-0 kubenswrapper[28120]: I0220 15:11:25.884467 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6f8j\" (UniqueName: \"kubernetes.io/projected/b82eb676-3ee7-434e-8eb3-116090ba1d8a-kube-api-access-t6f8j\") pod \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\" (UID: \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\") " Feb 20 15:11:25.884733 master-0 kubenswrapper[28120]: I0220 15:11:25.884495 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b82eb676-3ee7-434e-8eb3-116090ba1d8a-bundle\") pod \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\" (UID: \"b82eb676-3ee7-434e-8eb3-116090ba1d8a\") " Feb 20 15:11:25.886617 master-0 kubenswrapper[28120]: I0220 15:11:25.886552 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b82eb676-3ee7-434e-8eb3-116090ba1d8a-bundle" (OuterVolumeSpecName: "bundle") pod "b82eb676-3ee7-434e-8eb3-116090ba1d8a" (UID: "b82eb676-3ee7-434e-8eb3-116090ba1d8a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:11:25.888364 master-0 kubenswrapper[28120]: I0220 15:11:25.888294 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82eb676-3ee7-434e-8eb3-116090ba1d8a-kube-api-access-t6f8j" (OuterVolumeSpecName: "kube-api-access-t6f8j") pod "b82eb676-3ee7-434e-8eb3-116090ba1d8a" (UID: "b82eb676-3ee7-434e-8eb3-116090ba1d8a"). InnerVolumeSpecName "kube-api-access-t6f8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:11:25.959440 master-0 kubenswrapper[28120]: I0220 15:11:25.959148 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b82eb676-3ee7-434e-8eb3-116090ba1d8a-util" (OuterVolumeSpecName: "util") pod "b82eb676-3ee7-434e-8eb3-116090ba1d8a" (UID: "b82eb676-3ee7-434e-8eb3-116090ba1d8a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:11:25.986521 master-0 kubenswrapper[28120]: I0220 15:11:25.986442 28120 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b82eb676-3ee7-434e-8eb3-116090ba1d8a-util\") on node \"master-0\" DevicePath \"\"" Feb 20 15:11:25.986521 master-0 kubenswrapper[28120]: I0220 15:11:25.986504 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6f8j\" (UniqueName: \"kubernetes.io/projected/b82eb676-3ee7-434e-8eb3-116090ba1d8a-kube-api-access-t6f8j\") on node \"master-0\" DevicePath \"\"" Feb 20 15:11:25.986521 master-0 kubenswrapper[28120]: I0220 15:11:25.986528 28120 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b82eb676-3ee7-434e-8eb3-116090ba1d8a-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:11:26.462774 master-0 kubenswrapper[28120]: I0220 15:11:26.462603 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt" event={"ID":"b82eb676-3ee7-434e-8eb3-116090ba1d8a","Type":"ContainerDied","Data":"ef9833bedeca9a81dd2e42f48459b874b9f927ee2b5a5231492aec0b268ff49d"} Feb 20 15:11:26.462774 master-0 kubenswrapper[28120]: I0220 15:11:26.462649 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef9833bedeca9a81dd2e42f48459b874b9f927ee2b5a5231492aec0b268ff49d" Feb 20 15:11:26.462774 master-0 kubenswrapper[28120]: I0220 15:11:26.462710 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088zpjt" Feb 20 15:11:29.613970 master-0 kubenswrapper[28120]: I0220 15:11:29.612480 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-hxf8r"] Feb 20 15:11:29.613970 master-0 kubenswrapper[28120]: E0220 15:11:29.612817 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82eb676-3ee7-434e-8eb3-116090ba1d8a" containerName="extract" Feb 20 15:11:29.613970 master-0 kubenswrapper[28120]: I0220 15:11:29.612833 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82eb676-3ee7-434e-8eb3-116090ba1d8a" containerName="extract" Feb 20 15:11:29.613970 master-0 kubenswrapper[28120]: E0220 15:11:29.612858 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82eb676-3ee7-434e-8eb3-116090ba1d8a" containerName="pull" Feb 20 15:11:29.613970 master-0 kubenswrapper[28120]: I0220 15:11:29.612866 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82eb676-3ee7-434e-8eb3-116090ba1d8a" containerName="pull" Feb 20 15:11:29.613970 master-0 kubenswrapper[28120]: E0220 15:11:29.612901 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82eb676-3ee7-434e-8eb3-116090ba1d8a" containerName="util" Feb 20 15:11:29.613970 master-0 kubenswrapper[28120]: I0220 
15:11:29.612909 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82eb676-3ee7-434e-8eb3-116090ba1d8a" containerName="util" Feb 20 15:11:29.613970 master-0 kubenswrapper[28120]: I0220 15:11:29.613131 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82eb676-3ee7-434e-8eb3-116090ba1d8a" containerName="extract" Feb 20 15:11:29.613970 master-0 kubenswrapper[28120]: I0220 15:11:29.613889 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-hxf8r" Feb 20 15:11:29.616610 master-0 kubenswrapper[28120]: I0220 15:11:29.616555 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 20 15:11:29.619099 master-0 kubenswrapper[28120]: I0220 15:11:29.619061 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 20 15:11:29.625415 master-0 kubenswrapper[28120]: I0220 15:11:29.625368 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-hxf8r"] Feb 20 15:11:29.749211 master-0 kubenswrapper[28120]: I0220 15:11:29.749124 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwqcb\" (UniqueName: \"kubernetes.io/projected/ce6bf5ab-c4a5-4636-b1c9-5237d8eea3d1-kube-api-access-jwqcb\") pod \"nmstate-operator-694c9596b7-hxf8r\" (UID: \"ce6bf5ab-c4a5-4636-b1c9-5237d8eea3d1\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-hxf8r" Feb 20 15:11:29.851010 master-0 kubenswrapper[28120]: I0220 15:11:29.850908 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwqcb\" (UniqueName: \"kubernetes.io/projected/ce6bf5ab-c4a5-4636-b1c9-5237d8eea3d1-kube-api-access-jwqcb\") pod \"nmstate-operator-694c9596b7-hxf8r\" (UID: \"ce6bf5ab-c4a5-4636-b1c9-5237d8eea3d1\") " 
pod="openshift-nmstate/nmstate-operator-694c9596b7-hxf8r" Feb 20 15:11:29.869605 master-0 kubenswrapper[28120]: I0220 15:11:29.869454 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwqcb\" (UniqueName: \"kubernetes.io/projected/ce6bf5ab-c4a5-4636-b1c9-5237d8eea3d1-kube-api-access-jwqcb\") pod \"nmstate-operator-694c9596b7-hxf8r\" (UID: \"ce6bf5ab-c4a5-4636-b1c9-5237d8eea3d1\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-hxf8r" Feb 20 15:11:29.956754 master-0 kubenswrapper[28120]: I0220 15:11:29.956683 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-hxf8r" Feb 20 15:11:30.441640 master-0 kubenswrapper[28120]: I0220 15:11:30.441505 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-hxf8r"] Feb 20 15:11:30.458223 master-0 kubenswrapper[28120]: W0220 15:11:30.458154 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce6bf5ab_c4a5_4636_b1c9_5237d8eea3d1.slice/crio-34f2e795ed53782749e39802aa7452ac09c777c2eeed3ba0af085b1fb317387f WatchSource:0}: Error finding container 34f2e795ed53782749e39802aa7452ac09c777c2eeed3ba0af085b1fb317387f: Status 404 returned error can't find the container with id 34f2e795ed53782749e39802aa7452ac09c777c2eeed3ba0af085b1fb317387f Feb 20 15:11:30.496916 master-0 kubenswrapper[28120]: I0220 15:11:30.496848 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-hxf8r" event={"ID":"ce6bf5ab-c4a5-4636-b1c9-5237d8eea3d1","Type":"ContainerStarted","Data":"34f2e795ed53782749e39802aa7452ac09c777c2eeed3ba0af085b1fb317387f"} Feb 20 15:11:33.547651 master-0 kubenswrapper[28120]: I0220 15:11:33.547375 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-hxf8r" 
podStartSLOduration=1.697362686 podStartE2EDuration="4.547358199s" podCreationTimestamp="2026-02-20 15:11:29 +0000 UTC" firstStartedPulling="2026-02-20 15:11:30.464581531 +0000 UTC m=+628.725375104" lastFinishedPulling="2026-02-20 15:11:33.314577054 +0000 UTC m=+631.575370617" observedRunningTime="2026-02-20 15:11:33.54540994 +0000 UTC m=+631.806203503" watchObservedRunningTime="2026-02-20 15:11:33.547358199 +0000 UTC m=+631.808151762" Feb 20 15:11:34.541896 master-0 kubenswrapper[28120]: I0220 15:11:34.541839 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-hxf8r" event={"ID":"ce6bf5ab-c4a5-4636-b1c9-5237d8eea3d1","Type":"ContainerStarted","Data":"e572271f4478675d3f8cad44e0e54f6a1351dc152a5ce5e9a4e1cd18d5df60f6"} Feb 20 15:11:34.556813 master-0 kubenswrapper[28120]: I0220 15:11:34.556753 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk"] Feb 20 15:11:34.557731 master-0 kubenswrapper[28120]: I0220 15:11:34.557701 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:11:34.559616 master-0 kubenswrapper[28120]: I0220 15:11:34.559587 28120 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 20 15:11:34.560578 master-0 kubenswrapper[28120]: I0220 15:11:34.559868 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 20 15:11:34.560578 master-0 kubenswrapper[28120]: I0220 15:11:34.559989 28120 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 20 15:11:34.560578 master-0 kubenswrapper[28120]: I0220 15:11:34.560260 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 20 15:11:34.582857 master-0 kubenswrapper[28120]: I0220 15:11:34.582789 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk"] Feb 20 15:11:34.733419 master-0 kubenswrapper[28120]: I0220 15:11:34.733345 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c306c05c-c427-431b-aefe-ecc5c0d84984-webhook-cert\") pod \"metallb-operator-controller-manager-7945d98f64-m5tgk\" (UID: \"c306c05c-c427-431b-aefe-ecc5c0d84984\") " pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:11:34.734029 master-0 kubenswrapper[28120]: I0220 15:11:34.733884 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2txq5\" (UniqueName: \"kubernetes.io/projected/c306c05c-c427-431b-aefe-ecc5c0d84984-kube-api-access-2txq5\") pod \"metallb-operator-controller-manager-7945d98f64-m5tgk\" (UID: \"c306c05c-c427-431b-aefe-ecc5c0d84984\") " 
pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:11:34.734121 master-0 kubenswrapper[28120]: I0220 15:11:34.734008 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c306c05c-c427-431b-aefe-ecc5c0d84984-apiservice-cert\") pod \"metallb-operator-controller-manager-7945d98f64-m5tgk\" (UID: \"c306c05c-c427-431b-aefe-ecc5c0d84984\") " pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:11:34.835489 master-0 kubenswrapper[28120]: I0220 15:11:34.835354 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c306c05c-c427-431b-aefe-ecc5c0d84984-webhook-cert\") pod \"metallb-operator-controller-manager-7945d98f64-m5tgk\" (UID: \"c306c05c-c427-431b-aefe-ecc5c0d84984\") " pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:11:34.835489 master-0 kubenswrapper[28120]: I0220 15:11:34.835441 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2txq5\" (UniqueName: \"kubernetes.io/projected/c306c05c-c427-431b-aefe-ecc5c0d84984-kube-api-access-2txq5\") pod \"metallb-operator-controller-manager-7945d98f64-m5tgk\" (UID: \"c306c05c-c427-431b-aefe-ecc5c0d84984\") " pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:11:34.835489 master-0 kubenswrapper[28120]: I0220 15:11:34.835475 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c306c05c-c427-431b-aefe-ecc5c0d84984-apiservice-cert\") pod \"metallb-operator-controller-manager-7945d98f64-m5tgk\" (UID: \"c306c05c-c427-431b-aefe-ecc5c0d84984\") " pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:11:34.838770 master-0 kubenswrapper[28120]: I0220 15:11:34.838717 
28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c306c05c-c427-431b-aefe-ecc5c0d84984-webhook-cert\") pod \"metallb-operator-controller-manager-7945d98f64-m5tgk\" (UID: \"c306c05c-c427-431b-aefe-ecc5c0d84984\") " pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:11:34.839048 master-0 kubenswrapper[28120]: I0220 15:11:34.839007 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c306c05c-c427-431b-aefe-ecc5c0d84984-apiservice-cert\") pod \"metallb-operator-controller-manager-7945d98f64-m5tgk\" (UID: \"c306c05c-c427-431b-aefe-ecc5c0d84984\") " pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:11:34.851026 master-0 kubenswrapper[28120]: I0220 15:11:34.850920 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2txq5\" (UniqueName: \"kubernetes.io/projected/c306c05c-c427-431b-aefe-ecc5c0d84984-kube-api-access-2txq5\") pod \"metallb-operator-controller-manager-7945d98f64-m5tgk\" (UID: \"c306c05c-c427-431b-aefe-ecc5c0d84984\") " pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:11:34.915881 master-0 kubenswrapper[28120]: I0220 15:11:34.915822 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:11:34.977960 master-0 kubenswrapper[28120]: I0220 15:11:34.971981 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf"] Feb 20 15:11:34.977960 master-0 kubenswrapper[28120]: I0220 15:11:34.973592 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:34.977960 master-0 kubenswrapper[28120]: I0220 15:11:34.977492 28120 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 20 15:11:34.993950 master-0 kubenswrapper[28120]: I0220 15:11:34.985153 28120 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 20 15:11:35.006891 master-0 kubenswrapper[28120]: I0220 15:11:35.006051 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf"] Feb 20 15:11:35.041945 master-0 kubenswrapper[28120]: I0220 15:11:35.038860 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5278bf8f-4b77-4681-b9a4-0b45aca2dc14-apiservice-cert\") pod \"metallb-operator-webhook-server-5c58c44cb6-vzkxf\" (UID: \"5278bf8f-4b77-4681-b9a4-0b45aca2dc14\") " pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:35.041945 master-0 kubenswrapper[28120]: I0220 15:11:35.039045 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5vj9\" (UniqueName: \"kubernetes.io/projected/5278bf8f-4b77-4681-b9a4-0b45aca2dc14-kube-api-access-c5vj9\") pod \"metallb-operator-webhook-server-5c58c44cb6-vzkxf\" (UID: \"5278bf8f-4b77-4681-b9a4-0b45aca2dc14\") " pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:35.041945 master-0 kubenswrapper[28120]: I0220 15:11:35.039090 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5278bf8f-4b77-4681-b9a4-0b45aca2dc14-webhook-cert\") pod \"metallb-operator-webhook-server-5c58c44cb6-vzkxf\" (UID: 
\"5278bf8f-4b77-4681-b9a4-0b45aca2dc14\") " pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:35.142357 master-0 kubenswrapper[28120]: I0220 15:11:35.142169 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5278bf8f-4b77-4681-b9a4-0b45aca2dc14-apiservice-cert\") pod \"metallb-operator-webhook-server-5c58c44cb6-vzkxf\" (UID: \"5278bf8f-4b77-4681-b9a4-0b45aca2dc14\") " pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:35.142357 master-0 kubenswrapper[28120]: I0220 15:11:35.142317 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5vj9\" (UniqueName: \"kubernetes.io/projected/5278bf8f-4b77-4681-b9a4-0b45aca2dc14-kube-api-access-c5vj9\") pod \"metallb-operator-webhook-server-5c58c44cb6-vzkxf\" (UID: \"5278bf8f-4b77-4681-b9a4-0b45aca2dc14\") " pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:35.142562 master-0 kubenswrapper[28120]: I0220 15:11:35.142358 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5278bf8f-4b77-4681-b9a4-0b45aca2dc14-webhook-cert\") pod \"metallb-operator-webhook-server-5c58c44cb6-vzkxf\" (UID: \"5278bf8f-4b77-4681-b9a4-0b45aca2dc14\") " pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:35.147269 master-0 kubenswrapper[28120]: I0220 15:11:35.147214 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5278bf8f-4b77-4681-b9a4-0b45aca2dc14-webhook-cert\") pod \"metallb-operator-webhook-server-5c58c44cb6-vzkxf\" (UID: \"5278bf8f-4b77-4681-b9a4-0b45aca2dc14\") " pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:35.154455 master-0 kubenswrapper[28120]: I0220 15:11:35.154401 28120 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5278bf8f-4b77-4681-b9a4-0b45aca2dc14-apiservice-cert\") pod \"metallb-operator-webhook-server-5c58c44cb6-vzkxf\" (UID: \"5278bf8f-4b77-4681-b9a4-0b45aca2dc14\") " pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:35.182031 master-0 kubenswrapper[28120]: I0220 15:11:35.181999 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5vj9\" (UniqueName: \"kubernetes.io/projected/5278bf8f-4b77-4681-b9a4-0b45aca2dc14-kube-api-access-c5vj9\") pod \"metallb-operator-webhook-server-5c58c44cb6-vzkxf\" (UID: \"5278bf8f-4b77-4681-b9a4-0b45aca2dc14\") " pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:35.314961 master-0 kubenswrapper[28120]: I0220 15:11:35.313704 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:35.465410 master-0 kubenswrapper[28120]: I0220 15:11:35.465357 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk"] Feb 20 15:11:35.468682 master-0 kubenswrapper[28120]: W0220 15:11:35.468643 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc306c05c_c427_431b_aefe_ecc5c0d84984.slice/crio-2e08b54878c578e9afa68bba246402a2972b6aae688f05cf6293cc5323281d8c WatchSource:0}: Error finding container 2e08b54878c578e9afa68bba246402a2972b6aae688f05cf6293cc5323281d8c: Status 404 returned error can't find the container with id 2e08b54878c578e9afa68bba246402a2972b6aae688f05cf6293cc5323281d8c Feb 20 15:11:35.573689 master-0 kubenswrapper[28120]: I0220 15:11:35.573633 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" 
event={"ID":"c306c05c-c427-431b-aefe-ecc5c0d84984","Type":"ContainerStarted","Data":"2e08b54878c578e9afa68bba246402a2972b6aae688f05cf6293cc5323281d8c"} Feb 20 15:11:35.834100 master-0 kubenswrapper[28120]: I0220 15:11:35.834048 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf"] Feb 20 15:11:35.841701 master-0 kubenswrapper[28120]: W0220 15:11:35.841615 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5278bf8f_4b77_4681_b9a4_0b45aca2dc14.slice/crio-2a96b1ec6b542127a4c4265e87f6b23db0484d0dcb5b30e7b52a2fcf7cff6724 WatchSource:0}: Error finding container 2a96b1ec6b542127a4c4265e87f6b23db0484d0dcb5b30e7b52a2fcf7cff6724: Status 404 returned error can't find the container with id 2a96b1ec6b542127a4c4265e87f6b23db0484d0dcb5b30e7b52a2fcf7cff6724 Feb 20 15:11:36.253573 master-0 kubenswrapper[28120]: I0220 15:11:36.253514 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm"] Feb 20 15:11:36.254609 master-0 kubenswrapper[28120]: I0220 15:11:36.254587 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm" Feb 20 15:11:36.256517 master-0 kubenswrapper[28120]: I0220 15:11:36.256478 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 20 15:11:36.256616 master-0 kubenswrapper[28120]: I0220 15:11:36.256482 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 20 15:11:36.266730 master-0 kubenswrapper[28120]: I0220 15:11:36.266683 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm"] Feb 20 15:11:36.361731 master-0 kubenswrapper[28120]: I0220 15:11:36.361473 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7mnw\" (UniqueName: \"kubernetes.io/projected/e0bd441e-e9a6-44b4-ade3-3b147a14a9c3-kube-api-access-g7mnw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-n2mrm\" (UID: \"e0bd441e-e9a6-44b4-ade3-3b147a14a9c3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm" Feb 20 15:11:36.361911 master-0 kubenswrapper[28120]: I0220 15:11:36.361792 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e0bd441e-e9a6-44b4-ade3-3b147a14a9c3-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-n2mrm\" (UID: \"e0bd441e-e9a6-44b4-ade3-3b147a14a9c3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm" Feb 20 15:11:36.463572 master-0 kubenswrapper[28120]: I0220 15:11:36.463465 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7mnw\" (UniqueName: \"kubernetes.io/projected/e0bd441e-e9a6-44b4-ade3-3b147a14a9c3-kube-api-access-g7mnw\") pod 
\"cert-manager-operator-controller-manager-66c8bdd694-n2mrm\" (UID: \"e0bd441e-e9a6-44b4-ade3-3b147a14a9c3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm" Feb 20 15:11:36.463805 master-0 kubenswrapper[28120]: I0220 15:11:36.463672 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e0bd441e-e9a6-44b4-ade3-3b147a14a9c3-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-n2mrm\" (UID: \"e0bd441e-e9a6-44b4-ade3-3b147a14a9c3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm" Feb 20 15:11:36.464428 master-0 kubenswrapper[28120]: I0220 15:11:36.464380 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e0bd441e-e9a6-44b4-ade3-3b147a14a9c3-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-n2mrm\" (UID: \"e0bd441e-e9a6-44b4-ade3-3b147a14a9c3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm" Feb 20 15:11:36.480199 master-0 kubenswrapper[28120]: I0220 15:11:36.480150 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7mnw\" (UniqueName: \"kubernetes.io/projected/e0bd441e-e9a6-44b4-ade3-3b147a14a9c3-kube-api-access-g7mnw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-n2mrm\" (UID: \"e0bd441e-e9a6-44b4-ade3-3b147a14a9c3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm" Feb 20 15:11:36.570004 master-0 kubenswrapper[28120]: I0220 15:11:36.569851 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm" Feb 20 15:11:36.584167 master-0 kubenswrapper[28120]: I0220 15:11:36.584107 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" event={"ID":"5278bf8f-4b77-4681-b9a4-0b45aca2dc14","Type":"ContainerStarted","Data":"2a96b1ec6b542127a4c4265e87f6b23db0484d0dcb5b30e7b52a2fcf7cff6724"} Feb 20 15:11:37.050227 master-0 kubenswrapper[28120]: I0220 15:11:37.049952 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm"] Feb 20 15:11:37.062598 master-0 kubenswrapper[28120]: W0220 15:11:37.059872 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0bd441e_e9a6_44b4_ade3_3b147a14a9c3.slice/crio-8397e5bfa5ee2145f4bbeb80e2388f60e7d308ac25ddaa69b581069d2b79f004 WatchSource:0}: Error finding container 8397e5bfa5ee2145f4bbeb80e2388f60e7d308ac25ddaa69b581069d2b79f004: Status 404 returned error can't find the container with id 8397e5bfa5ee2145f4bbeb80e2388f60e7d308ac25ddaa69b581069d2b79f004 Feb 20 15:11:37.602142 master-0 kubenswrapper[28120]: I0220 15:11:37.602053 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm" event={"ID":"e0bd441e-e9a6-44b4-ade3-3b147a14a9c3","Type":"ContainerStarted","Data":"8397e5bfa5ee2145f4bbeb80e2388f60e7d308ac25ddaa69b581069d2b79f004"} Feb 20 15:11:40.681046 master-0 kubenswrapper[28120]: I0220 15:11:40.680933 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" event={"ID":"c306c05c-c427-431b-aefe-ecc5c0d84984","Type":"ContainerStarted","Data":"eff301c3f766530f45a82a97920f8b596024072109e3d20124904cc66f429119"} Feb 20 15:11:40.682193 master-0 
kubenswrapper[28120]: I0220 15:11:40.682020 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:11:40.707231 master-0 kubenswrapper[28120]: I0220 15:11:40.707130 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" podStartSLOduration=2.073570758 podStartE2EDuration="6.707113948s" podCreationTimestamp="2026-02-20 15:11:34 +0000 UTC" firstStartedPulling="2026-02-20 15:11:35.472087548 +0000 UTC m=+633.732881111" lastFinishedPulling="2026-02-20 15:11:40.105630738 +0000 UTC m=+638.366424301" observedRunningTime="2026-02-20 15:11:40.702551644 +0000 UTC m=+638.963345207" watchObservedRunningTime="2026-02-20 15:11:40.707113948 +0000 UTC m=+638.967907501" Feb 20 15:11:43.725005 master-0 kubenswrapper[28120]: I0220 15:11:43.722794 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" event={"ID":"5278bf8f-4b77-4681-b9a4-0b45aca2dc14","Type":"ContainerStarted","Data":"38410cd3c613a0d040871921d4a7053e656cad13d088318d0d04fd3b6866b99d"} Feb 20 15:11:43.725005 master-0 kubenswrapper[28120]: I0220 15:11:43.724003 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:43.726291 master-0 kubenswrapper[28120]: I0220 15:11:43.726250 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm" event={"ID":"e0bd441e-e9a6-44b4-ade3-3b147a14a9c3","Type":"ContainerStarted","Data":"b61aa7941e7fab127bacac69b7ea768873d5a0f516965bbf438fa37135bd0ac9"} Feb 20 15:11:43.756084 master-0 kubenswrapper[28120]: I0220 15:11:43.753965 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" podStartSLOduration=2.078806858 podStartE2EDuration="9.75394681s" podCreationTimestamp="2026-02-20 15:11:34 +0000 UTC" firstStartedPulling="2026-02-20 15:11:35.845337705 +0000 UTC m=+634.106131268" lastFinishedPulling="2026-02-20 15:11:43.520477657 +0000 UTC m=+641.781271220" observedRunningTime="2026-02-20 15:11:43.750257588 +0000 UTC m=+642.011051151" watchObservedRunningTime="2026-02-20 15:11:43.75394681 +0000 UTC m=+642.014740403" Feb 20 15:11:43.787847 master-0 kubenswrapper[28120]: I0220 15:11:43.787751 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-n2mrm" podStartSLOduration=1.360118311 podStartE2EDuration="7.787732792s" podCreationTimestamp="2026-02-20 15:11:36 +0000 UTC" firstStartedPulling="2026-02-20 15:11:37.062879629 +0000 UTC m=+635.323673192" lastFinishedPulling="2026-02-20 15:11:43.4904941 +0000 UTC m=+641.751287673" observedRunningTime="2026-02-20 15:11:43.776478402 +0000 UTC m=+642.037271975" watchObservedRunningTime="2026-02-20 15:11:43.787732792 +0000 UTC m=+642.048526355" Feb 20 15:11:50.127831 master-0 kubenswrapper[28120]: I0220 15:11:50.127755 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-kh9wb"] Feb 20 15:11:50.130392 master-0 kubenswrapper[28120]: I0220 15:11:50.130312 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" Feb 20 15:11:50.132884 master-0 kubenswrapper[28120]: I0220 15:11:50.132628 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 20 15:11:50.132884 master-0 kubenswrapper[28120]: I0220 15:11:50.132670 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 20 15:11:50.139871 master-0 kubenswrapper[28120]: I0220 15:11:50.139812 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-kh9wb"] Feb 20 15:11:50.329308 master-0 kubenswrapper[28120]: I0220 15:11:50.329239 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-472x9\" (UniqueName: \"kubernetes.io/projected/2019cf71-807c-41bd-966a-cefcd866e3d6-kube-api-access-472x9\") pod \"cert-manager-webhook-6888856db4-kh9wb\" (UID: \"2019cf71-807c-41bd-966a-cefcd866e3d6\") " pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" Feb 20 15:11:50.329523 master-0 kubenswrapper[28120]: I0220 15:11:50.329313 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2019cf71-807c-41bd-966a-cefcd866e3d6-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-kh9wb\" (UID: \"2019cf71-807c-41bd-966a-cefcd866e3d6\") " pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" Feb 20 15:11:50.430689 master-0 kubenswrapper[28120]: I0220 15:11:50.430539 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-472x9\" (UniqueName: \"kubernetes.io/projected/2019cf71-807c-41bd-966a-cefcd866e3d6-kube-api-access-472x9\") pod \"cert-manager-webhook-6888856db4-kh9wb\" (UID: \"2019cf71-807c-41bd-966a-cefcd866e3d6\") " pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" Feb 20 
15:11:50.431142 master-0 kubenswrapper[28120]: I0220 15:11:50.431090 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2019cf71-807c-41bd-966a-cefcd866e3d6-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-kh9wb\" (UID: \"2019cf71-807c-41bd-966a-cefcd866e3d6\") " pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" Feb 20 15:11:50.448323 master-0 kubenswrapper[28120]: I0220 15:11:50.448265 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-472x9\" (UniqueName: \"kubernetes.io/projected/2019cf71-807c-41bd-966a-cefcd866e3d6-kube-api-access-472x9\") pod \"cert-manager-webhook-6888856db4-kh9wb\" (UID: \"2019cf71-807c-41bd-966a-cefcd866e3d6\") " pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" Feb 20 15:11:50.461810 master-0 kubenswrapper[28120]: I0220 15:11:50.461743 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2019cf71-807c-41bd-966a-cefcd866e3d6-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-kh9wb\" (UID: \"2019cf71-807c-41bd-966a-cefcd866e3d6\") " pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" Feb 20 15:11:50.743952 master-0 kubenswrapper[28120]: I0220 15:11:50.743892 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" Feb 20 15:11:50.757970 master-0 kubenswrapper[28120]: I0220 15:11:50.757901 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-kfhfm"] Feb 20 15:11:50.758818 master-0 kubenswrapper[28120]: I0220 15:11:50.758788 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kfhfm" Feb 20 15:11:50.761003 master-0 kubenswrapper[28120]: I0220 15:11:50.760953 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 20 15:11:50.761273 master-0 kubenswrapper[28120]: I0220 15:11:50.761234 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 20 15:11:50.783401 master-0 kubenswrapper[28120]: I0220 15:11:50.783350 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-kfhfm"] Feb 20 15:11:50.811948 master-0 kubenswrapper[28120]: I0220 15:11:50.809068 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-ghsgk"] Feb 20 15:11:50.811948 master-0 kubenswrapper[28120]: I0220 15:11:50.810029 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-ghsgk" Feb 20 15:11:50.814632 master-0 kubenswrapper[28120]: I0220 15:11:50.814501 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-ghsgk"] Feb 20 15:11:50.935664 master-0 kubenswrapper[28120]: I0220 15:11:50.934989 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg"] Feb 20 15:11:50.936595 master-0 kubenswrapper[28120]: I0220 15:11:50.936300 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg" Feb 20 15:11:50.939849 master-0 kubenswrapper[28120]: I0220 15:11:50.939799 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 20 15:11:50.942609 master-0 kubenswrapper[28120]: I0220 15:11:50.942194 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xj4d\" (UniqueName: \"kubernetes.io/projected/e5970649-ccb8-4c18-9b5e-40edde5f0994-kube-api-access-6xj4d\") pod \"obo-prometheus-operator-68bc856cb9-kfhfm\" (UID: \"e5970649-ccb8-4c18-9b5e-40edde5f0994\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kfhfm" Feb 20 15:11:50.942609 master-0 kubenswrapper[28120]: I0220 15:11:50.942424 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7dls\" (UniqueName: \"kubernetes.io/projected/11dbd2fd-149c-4322-81f3-bd8adb42319c-kube-api-access-w7dls\") pod \"cert-manager-cainjector-5545bd876-ghsgk\" (UID: \"11dbd2fd-149c-4322-81f3-bd8adb42319c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-ghsgk" Feb 20 15:11:50.942609 master-0 kubenswrapper[28120]: I0220 15:11:50.942527 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/11dbd2fd-149c-4322-81f3-bd8adb42319c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-ghsgk\" (UID: \"11dbd2fd-149c-4322-81f3-bd8adb42319c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-ghsgk" Feb 20 15:11:50.964248 master-0 kubenswrapper[28120]: I0220 15:11:50.961745 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l"] Feb 20 15:11:50.964971 master-0 kubenswrapper[28120]: I0220 
15:11:50.964584 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l" Feb 20 15:11:50.969073 master-0 kubenswrapper[28120]: I0220 15:11:50.969050 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg"] Feb 20 15:11:50.981988 master-0 kubenswrapper[28120]: I0220 15:11:50.981946 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l"] Feb 20 15:11:51.028752 master-0 kubenswrapper[28120]: I0220 15:11:51.028636 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-msvrz"] Feb 20 15:11:51.030994 master-0 kubenswrapper[28120]: I0220 15:11:51.029563 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-msvrz" Feb 20 15:11:51.034135 master-0 kubenswrapper[28120]: I0220 15:11:51.033947 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 20 15:11:51.038344 master-0 kubenswrapper[28120]: I0220 15:11:51.037796 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-msvrz"] Feb 20 15:11:51.044266 master-0 kubenswrapper[28120]: I0220 15:11:51.044210 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7dls\" (UniqueName: \"kubernetes.io/projected/11dbd2fd-149c-4322-81f3-bd8adb42319c-kube-api-access-w7dls\") pod \"cert-manager-cainjector-5545bd876-ghsgk\" (UID: \"11dbd2fd-149c-4322-81f3-bd8adb42319c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-ghsgk" Feb 20 15:11:51.044453 master-0 kubenswrapper[28120]: I0220 15:11:51.044356 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/11dbd2fd-149c-4322-81f3-bd8adb42319c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-ghsgk\" (UID: \"11dbd2fd-149c-4322-81f3-bd8adb42319c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-ghsgk" Feb 20 15:11:51.044453 master-0 kubenswrapper[28120]: I0220 15:11:51.044390 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xj4d\" (UniqueName: \"kubernetes.io/projected/e5970649-ccb8-4c18-9b5e-40edde5f0994-kube-api-access-6xj4d\") pod \"obo-prometheus-operator-68bc856cb9-kfhfm\" (UID: \"e5970649-ccb8-4c18-9b5e-40edde5f0994\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kfhfm" Feb 20 15:11:51.044531 master-0 kubenswrapper[28120]: I0220 15:11:51.044453 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca00bc4b-3578-4bb3-b9c0-5942d8723893-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg\" (UID: \"ca00bc4b-3578-4bb3-b9c0-5942d8723893\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg" Feb 20 15:11:51.044531 master-0 kubenswrapper[28120]: I0220 15:11:51.044492 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca00bc4b-3578-4bb3-b9c0-5942d8723893-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg\" (UID: \"ca00bc4b-3578-4bb3-b9c0-5942d8723893\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg" Feb 20 15:11:51.076822 master-0 kubenswrapper[28120]: I0220 15:11:51.076762 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7dls\" (UniqueName: 
\"kubernetes.io/projected/11dbd2fd-149c-4322-81f3-bd8adb42319c-kube-api-access-w7dls\") pod \"cert-manager-cainjector-5545bd876-ghsgk\" (UID: \"11dbd2fd-149c-4322-81f3-bd8adb42319c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-ghsgk" Feb 20 15:11:51.077072 master-0 kubenswrapper[28120]: I0220 15:11:51.076881 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xj4d\" (UniqueName: \"kubernetes.io/projected/e5970649-ccb8-4c18-9b5e-40edde5f0994-kube-api-access-6xj4d\") pod \"obo-prometheus-operator-68bc856cb9-kfhfm\" (UID: \"e5970649-ccb8-4c18-9b5e-40edde5f0994\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kfhfm" Feb 20 15:11:51.082014 master-0 kubenswrapper[28120]: I0220 15:11:51.081836 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/11dbd2fd-149c-4322-81f3-bd8adb42319c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-ghsgk\" (UID: \"11dbd2fd-149c-4322-81f3-bd8adb42319c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-ghsgk" Feb 20 15:11:51.149960 master-0 kubenswrapper[28120]: I0220 15:11:51.146672 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca00bc4b-3578-4bb3-b9c0-5942d8723893-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg\" (UID: \"ca00bc4b-3578-4bb3-b9c0-5942d8723893\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg" Feb 20 15:11:51.149960 master-0 kubenswrapper[28120]: I0220 15:11:51.146746 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9410955-3f67-4030-9b3a-6485017ce3e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l\" (UID: \"f9410955-3f67-4030-9b3a-6485017ce3e5\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l" Feb 20 15:11:51.149960 master-0 kubenswrapper[28120]: I0220 15:11:51.146776 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxtzj\" (UniqueName: \"kubernetes.io/projected/be0f8dc1-7bdf-4684-b539-0d61f093f0e2-kube-api-access-qxtzj\") pod \"observability-operator-59bdc8b94-msvrz\" (UID: \"be0f8dc1-7bdf-4684-b539-0d61f093f0e2\") " pod="openshift-operators/observability-operator-59bdc8b94-msvrz" Feb 20 15:11:51.149960 master-0 kubenswrapper[28120]: I0220 15:11:51.146801 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca00bc4b-3578-4bb3-b9c0-5942d8723893-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg\" (UID: \"ca00bc4b-3578-4bb3-b9c0-5942d8723893\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg" Feb 20 15:11:51.149960 master-0 kubenswrapper[28120]: I0220 15:11:51.146831 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9410955-3f67-4030-9b3a-6485017ce3e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l\" (UID: \"f9410955-3f67-4030-9b3a-6485017ce3e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l" Feb 20 15:11:51.149960 master-0 kubenswrapper[28120]: I0220 15:11:51.146885 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/be0f8dc1-7bdf-4684-b539-0d61f093f0e2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-msvrz\" (UID: \"be0f8dc1-7bdf-4684-b539-0d61f093f0e2\") " pod="openshift-operators/observability-operator-59bdc8b94-msvrz" Feb 20 
15:11:51.158961 master-0 kubenswrapper[28120]: I0220 15:11:51.153459 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kfhfm" Feb 20 15:11:51.158961 master-0 kubenswrapper[28120]: I0220 15:11:51.154357 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca00bc4b-3578-4bb3-b9c0-5942d8723893-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg\" (UID: \"ca00bc4b-3578-4bb3-b9c0-5942d8723893\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg" Feb 20 15:11:51.158961 master-0 kubenswrapper[28120]: I0220 15:11:51.157534 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca00bc4b-3578-4bb3-b9c0-5942d8723893-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg\" (UID: \"ca00bc4b-3578-4bb3-b9c0-5942d8723893\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg" Feb 20 15:11:51.195950 master-0 kubenswrapper[28120]: I0220 15:11:51.189202 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-ghsgk" Feb 20 15:11:51.248825 master-0 kubenswrapper[28120]: I0220 15:11:51.248768 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9410955-3f67-4030-9b3a-6485017ce3e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l\" (UID: \"f9410955-3f67-4030-9b3a-6485017ce3e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l" Feb 20 15:11:51.249344 master-0 kubenswrapper[28120]: I0220 15:11:51.249300 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/be0f8dc1-7bdf-4684-b539-0d61f093f0e2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-msvrz\" (UID: \"be0f8dc1-7bdf-4684-b539-0d61f093f0e2\") " pod="openshift-operators/observability-operator-59bdc8b94-msvrz" Feb 20 15:11:51.249428 master-0 kubenswrapper[28120]: I0220 15:11:51.249396 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9410955-3f67-4030-9b3a-6485017ce3e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l\" (UID: \"f9410955-3f67-4030-9b3a-6485017ce3e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l" Feb 20 15:11:51.249472 master-0 kubenswrapper[28120]: I0220 15:11:51.249430 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxtzj\" (UniqueName: \"kubernetes.io/projected/be0f8dc1-7bdf-4684-b539-0d61f093f0e2-kube-api-access-qxtzj\") pod \"observability-operator-59bdc8b94-msvrz\" (UID: \"be0f8dc1-7bdf-4684-b539-0d61f093f0e2\") " pod="openshift-operators/observability-operator-59bdc8b94-msvrz" Feb 20 15:11:51.258111 master-0 kubenswrapper[28120]: I0220 15:11:51.253063 
28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg" Feb 20 15:11:51.258444 master-0 kubenswrapper[28120]: I0220 15:11:51.254713 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9410955-3f67-4030-9b3a-6485017ce3e5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l\" (UID: \"f9410955-3f67-4030-9b3a-6485017ce3e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l" Feb 20 15:11:51.258444 master-0 kubenswrapper[28120]: I0220 15:11:51.255347 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9410955-3f67-4030-9b3a-6485017ce3e5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l\" (UID: \"f9410955-3f67-4030-9b3a-6485017ce3e5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l" Feb 20 15:11:51.258444 master-0 kubenswrapper[28120]: I0220 15:11:51.255581 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/be0f8dc1-7bdf-4684-b539-0d61f093f0e2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-msvrz\" (UID: \"be0f8dc1-7bdf-4684-b539-0d61f093f0e2\") " pod="openshift-operators/observability-operator-59bdc8b94-msvrz" Feb 20 15:11:51.258650 master-0 kubenswrapper[28120]: I0220 15:11:51.258596 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-67fxz"] Feb 20 15:11:51.260543 master-0 kubenswrapper[28120]: I0220 15:11:51.259524 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-67fxz" Feb 20 15:11:51.296471 master-0 kubenswrapper[28120]: I0220 15:11:51.279869 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-67fxz"] Feb 20 15:11:51.302534 master-0 kubenswrapper[28120]: I0220 15:11:51.302108 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l" Feb 20 15:11:51.307161 master-0 kubenswrapper[28120]: I0220 15:11:51.306601 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxtzj\" (UniqueName: \"kubernetes.io/projected/be0f8dc1-7bdf-4684-b539-0d61f093f0e2-kube-api-access-qxtzj\") pod \"observability-operator-59bdc8b94-msvrz\" (UID: \"be0f8dc1-7bdf-4684-b539-0d61f093f0e2\") " pod="openshift-operators/observability-operator-59bdc8b94-msvrz" Feb 20 15:11:51.307161 master-0 kubenswrapper[28120]: I0220 15:11:51.306847 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-kh9wb"] Feb 20 15:11:51.355959 master-0 kubenswrapper[28120]: I0220 15:11:51.354647 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-msvrz" Feb 20 15:11:51.461512 master-0 kubenswrapper[28120]: I0220 15:11:51.460797 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xckc\" (UniqueName: \"kubernetes.io/projected/bd3f2939-a415-4512-935a-614c7064cb87-kube-api-access-9xckc\") pod \"perses-operator-5bf474d74f-67fxz\" (UID: \"bd3f2939-a415-4512-935a-614c7064cb87\") " pod="openshift-operators/perses-operator-5bf474d74f-67fxz" Feb 20 15:11:51.461512 master-0 kubenswrapper[28120]: I0220 15:11:51.460908 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd3f2939-a415-4512-935a-614c7064cb87-openshift-service-ca\") pod \"perses-operator-5bf474d74f-67fxz\" (UID: \"bd3f2939-a415-4512-935a-614c7064cb87\") " pod="openshift-operators/perses-operator-5bf474d74f-67fxz" Feb 20 15:11:51.564019 master-0 kubenswrapper[28120]: I0220 15:11:51.563060 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd3f2939-a415-4512-935a-614c7064cb87-openshift-service-ca\") pod \"perses-operator-5bf474d74f-67fxz\" (UID: \"bd3f2939-a415-4512-935a-614c7064cb87\") " pod="openshift-operators/perses-operator-5bf474d74f-67fxz" Feb 20 15:11:51.564019 master-0 kubenswrapper[28120]: I0220 15:11:51.563148 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xckc\" (UniqueName: \"kubernetes.io/projected/bd3f2939-a415-4512-935a-614c7064cb87-kube-api-access-9xckc\") pod \"perses-operator-5bf474d74f-67fxz\" (UID: \"bd3f2939-a415-4512-935a-614c7064cb87\") " pod="openshift-operators/perses-operator-5bf474d74f-67fxz" Feb 20 15:11:51.565356 master-0 kubenswrapper[28120]: I0220 15:11:51.565325 28120 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd3f2939-a415-4512-935a-614c7064cb87-openshift-service-ca\") pod \"perses-operator-5bf474d74f-67fxz\" (UID: \"bd3f2939-a415-4512-935a-614c7064cb87\") " pod="openshift-operators/perses-operator-5bf474d74f-67fxz" Feb 20 15:11:51.578968 master-0 kubenswrapper[28120]: I0220 15:11:51.578905 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xckc\" (UniqueName: \"kubernetes.io/projected/bd3f2939-a415-4512-935a-614c7064cb87-kube-api-access-9xckc\") pod \"perses-operator-5bf474d74f-67fxz\" (UID: \"bd3f2939-a415-4512-935a-614c7064cb87\") " pod="openshift-operators/perses-operator-5bf474d74f-67fxz" Feb 20 15:11:51.613451 master-0 kubenswrapper[28120]: I0220 15:11:51.609331 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-67fxz" Feb 20 15:11:51.751355 master-0 kubenswrapper[28120]: I0220 15:11:51.741146 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-ghsgk"] Feb 20 15:11:51.798113 master-0 kubenswrapper[28120]: I0220 15:11:51.798049 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-ghsgk" event={"ID":"11dbd2fd-149c-4322-81f3-bd8adb42319c","Type":"ContainerStarted","Data":"13981f67038006147c4ad0d8322f7efd7f07988c1ff916c306c20cbaa76b8564"} Feb 20 15:11:51.799429 master-0 kubenswrapper[28120]: I0220 15:11:51.799389 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" event={"ID":"2019cf71-807c-41bd-966a-cefcd866e3d6","Type":"ContainerStarted","Data":"a788ae50d4f5454a3c589f5e54d23cc953eea2e8b1bf284216a47dac334a5a0b"} Feb 20 15:11:51.815382 master-0 kubenswrapper[28120]: I0220 15:11:51.811375 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-kfhfm"] Feb 20 15:11:51.885616 master-0 kubenswrapper[28120]: I0220 15:11:51.885564 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l"] Feb 20 15:11:51.898768 master-0 kubenswrapper[28120]: W0220 15:11:51.898719 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9410955_3f67_4030_9b3a_6485017ce3e5.slice/crio-4f4e149eafc608b355e1aa97826f9e6ae32683a5abefae5374ef96afade8cc44 WatchSource:0}: Error finding container 4f4e149eafc608b355e1aa97826f9e6ae32683a5abefae5374ef96afade8cc44: Status 404 returned error can't find the container with id 4f4e149eafc608b355e1aa97826f9e6ae32683a5abefae5374ef96afade8cc44 Feb 20 15:11:51.910747 master-0 kubenswrapper[28120]: I0220 15:11:51.909906 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg"] Feb 20 15:11:52.018740 master-0 kubenswrapper[28120]: I0220 15:11:52.018692 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-msvrz"] Feb 20 15:11:52.018876 master-0 kubenswrapper[28120]: W0220 15:11:52.018833 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe0f8dc1_7bdf_4684_b539_0d61f093f0e2.slice/crio-21183523c6b2e99f269f7e40ad79e21b398086aa0038c8cd804a787565f72fea WatchSource:0}: Error finding container 21183523c6b2e99f269f7e40ad79e21b398086aa0038c8cd804a787565f72fea: Status 404 returned error can't find the container with id 21183523c6b2e99f269f7e40ad79e21b398086aa0038c8cd804a787565f72fea Feb 20 15:11:52.107272 master-0 kubenswrapper[28120]: I0220 15:11:52.107219 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/perses-operator-5bf474d74f-67fxz"] Feb 20 15:11:52.112255 master-0 kubenswrapper[28120]: W0220 15:11:52.112211 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd3f2939_a415_4512_935a_614c7064cb87.slice/crio-2235c65fb480c7f04fff2bd78c5160ec5490597617ca9faaf1aa5db596896c1f WatchSource:0}: Error finding container 2235c65fb480c7f04fff2bd78c5160ec5490597617ca9faaf1aa5db596896c1f: Status 404 returned error can't find the container with id 2235c65fb480c7f04fff2bd78c5160ec5490597617ca9faaf1aa5db596896c1f Feb 20 15:11:52.810496 master-0 kubenswrapper[28120]: I0220 15:11:52.810431 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-67fxz" event={"ID":"bd3f2939-a415-4512-935a-614c7064cb87","Type":"ContainerStarted","Data":"2235c65fb480c7f04fff2bd78c5160ec5490597617ca9faaf1aa5db596896c1f"} Feb 20 15:11:52.812716 master-0 kubenswrapper[28120]: I0220 15:11:52.812679 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l" event={"ID":"f9410955-3f67-4030-9b3a-6485017ce3e5","Type":"ContainerStarted","Data":"4f4e149eafc608b355e1aa97826f9e6ae32683a5abefae5374ef96afade8cc44"} Feb 20 15:11:52.814140 master-0 kubenswrapper[28120]: I0220 15:11:52.814102 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg" event={"ID":"ca00bc4b-3578-4bb3-b9c0-5942d8723893","Type":"ContainerStarted","Data":"778fddbfff240862ad20c9f6c9e01202d85ef8029f9869d54080426027855323"} Feb 20 15:11:52.815626 master-0 kubenswrapper[28120]: I0220 15:11:52.815586 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-msvrz" 
event={"ID":"be0f8dc1-7bdf-4684-b539-0d61f093f0e2","Type":"ContainerStarted","Data":"21183523c6b2e99f269f7e40ad79e21b398086aa0038c8cd804a787565f72fea"} Feb 20 15:11:52.818206 master-0 kubenswrapper[28120]: I0220 15:11:52.818167 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kfhfm" event={"ID":"e5970649-ccb8-4c18-9b5e-40edde5f0994","Type":"ContainerStarted","Data":"8ca6ad7c1918c9338d27af938a7eee1c71da13be2710a8c1480b05701de10784"} Feb 20 15:11:55.338601 master-0 kubenswrapper[28120]: I0220 15:11:55.338207 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5c58c44cb6-vzkxf" Feb 20 15:11:56.959114 master-0 kubenswrapper[28120]: I0220 15:11:56.959007 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-h4rqf"] Feb 20 15:11:56.961158 master-0 kubenswrapper[28120]: I0220 15:11:56.960335 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-h4rqf" Feb 20 15:11:56.978106 master-0 kubenswrapper[28120]: I0220 15:11:56.974844 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-h4rqf"] Feb 20 15:11:57.124409 master-0 kubenswrapper[28120]: I0220 15:11:57.124342 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjmmh\" (UniqueName: \"kubernetes.io/projected/3dcb37f4-b170-427b-aa90-fbda68b22a92-kube-api-access-bjmmh\") pod \"cert-manager-545d4d4674-h4rqf\" (UID: \"3dcb37f4-b170-427b-aa90-fbda68b22a92\") " pod="cert-manager/cert-manager-545d4d4674-h4rqf" Feb 20 15:11:57.124631 master-0 kubenswrapper[28120]: I0220 15:11:57.124589 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3dcb37f4-b170-427b-aa90-fbda68b22a92-bound-sa-token\") pod \"cert-manager-545d4d4674-h4rqf\" (UID: \"3dcb37f4-b170-427b-aa90-fbda68b22a92\") " pod="cert-manager/cert-manager-545d4d4674-h4rqf" Feb 20 15:11:57.230577 master-0 kubenswrapper[28120]: I0220 15:11:57.230457 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3dcb37f4-b170-427b-aa90-fbda68b22a92-bound-sa-token\") pod \"cert-manager-545d4d4674-h4rqf\" (UID: \"3dcb37f4-b170-427b-aa90-fbda68b22a92\") " pod="cert-manager/cert-manager-545d4d4674-h4rqf" Feb 20 15:11:57.231298 master-0 kubenswrapper[28120]: I0220 15:11:57.231268 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjmmh\" (UniqueName: \"kubernetes.io/projected/3dcb37f4-b170-427b-aa90-fbda68b22a92-kube-api-access-bjmmh\") pod \"cert-manager-545d4d4674-h4rqf\" (UID: \"3dcb37f4-b170-427b-aa90-fbda68b22a92\") " pod="cert-manager/cert-manager-545d4d4674-h4rqf" Feb 20 15:11:57.249150 master-0 
kubenswrapper[28120]: I0220 15:11:57.249103 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjmmh\" (UniqueName: \"kubernetes.io/projected/3dcb37f4-b170-427b-aa90-fbda68b22a92-kube-api-access-bjmmh\") pod \"cert-manager-545d4d4674-h4rqf\" (UID: \"3dcb37f4-b170-427b-aa90-fbda68b22a92\") " pod="cert-manager/cert-manager-545d4d4674-h4rqf" Feb 20 15:11:57.257964 master-0 kubenswrapper[28120]: I0220 15:11:57.256726 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3dcb37f4-b170-427b-aa90-fbda68b22a92-bound-sa-token\") pod \"cert-manager-545d4d4674-h4rqf\" (UID: \"3dcb37f4-b170-427b-aa90-fbda68b22a92\") " pod="cert-manager/cert-manager-545d4d4674-h4rqf" Feb 20 15:11:57.295600 master-0 kubenswrapper[28120]: I0220 15:11:57.295527 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-h4rqf" Feb 20 15:12:02.799117 master-0 kubenswrapper[28120]: I0220 15:12:02.799037 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-h4rqf"] Feb 20 15:12:02.807181 master-0 kubenswrapper[28120]: W0220 15:12:02.806210 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dcb37f4_b170_427b_aa90_fbda68b22a92.slice/crio-8a31f2cd7160c488747f4b5d9111b1817099ec5e6552cda4b140fce7b46e0156 WatchSource:0}: Error finding container 8a31f2cd7160c488747f4b5d9111b1817099ec5e6552cda4b140fce7b46e0156: Status 404 returned error can't find the container with id 8a31f2cd7160c488747f4b5d9111b1817099ec5e6552cda4b140fce7b46e0156 Feb 20 15:12:02.921973 master-0 kubenswrapper[28120]: I0220 15:12:02.921900 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-msvrz" 
event={"ID":"be0f8dc1-7bdf-4684-b539-0d61f093f0e2","Type":"ContainerStarted","Data":"b9629a6afe8c8984af63ea0e330e481bccbd95d54156f257e18fcb9c62afd108"} Feb 20 15:12:02.922155 master-0 kubenswrapper[28120]: I0220 15:12:02.921982 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-msvrz" Feb 20 15:12:02.923178 master-0 kubenswrapper[28120]: I0220 15:12:02.923119 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-ghsgk" event={"ID":"11dbd2fd-149c-4322-81f3-bd8adb42319c","Type":"ContainerStarted","Data":"857c0e1faa129c94fc333c3bda8aa25d8f1c5e5c2104250ba10b80d2fbfec470"} Feb 20 15:12:02.923528 master-0 kubenswrapper[28120]: I0220 15:12:02.923494 28120 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-msvrz container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.128.0.134:8081/healthz\": dial tcp 10.128.0.134:8081: connect: connection refused" start-of-body= Feb 20 15:12:02.923596 master-0 kubenswrapper[28120]: I0220 15:12:02.923570 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-msvrz" podUID="be0f8dc1-7bdf-4684-b539-0d61f093f0e2" containerName="operator" probeResult="failure" output="Get \"http://10.128.0.134:8081/healthz\": dial tcp 10.128.0.134:8081: connect: connection refused" Feb 20 15:12:02.926133 master-0 kubenswrapper[28120]: I0220 15:12:02.926094 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kfhfm" event={"ID":"e5970649-ccb8-4c18-9b5e-40edde5f0994","Type":"ContainerStarted","Data":"26e8d8b6e6f7bf3982ede9f256169f64d55553be4de8e54ffe7fa058c773c39e"} Feb 20 15:12:02.927641 master-0 kubenswrapper[28120]: I0220 15:12:02.927605 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" event={"ID":"2019cf71-807c-41bd-966a-cefcd866e3d6","Type":"ContainerStarted","Data":"20cb54f50ab16f4335eb9d0b9e9b496e52d8dc91f7c0c9cd91beb36d6029a972"} Feb 20 15:12:02.928571 master-0 kubenswrapper[28120]: I0220 15:12:02.928539 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" Feb 20 15:12:02.930268 master-0 kubenswrapper[28120]: I0220 15:12:02.930211 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l" event={"ID":"f9410955-3f67-4030-9b3a-6485017ce3e5","Type":"ContainerStarted","Data":"82db7b97d6572dc0a3773097a485d591f897ab28585bfdf11095bdf2aba5d381"} Feb 20 15:12:02.940018 master-0 kubenswrapper[28120]: I0220 15:12:02.939974 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-67fxz" event={"ID":"bd3f2939-a415-4512-935a-614c7064cb87","Type":"ContainerStarted","Data":"d17c74badc968c2ab5bf8c3852d2cbc5b4d0ffc131d11fc647252ceacff06aa2"} Feb 20 15:12:02.940687 master-0 kubenswrapper[28120]: I0220 15:12:02.940653 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-67fxz" Feb 20 15:12:02.943657 master-0 kubenswrapper[28120]: I0220 15:12:02.943556 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-msvrz" podStartSLOduration=2.585488191 podStartE2EDuration="12.94353251s" podCreationTimestamp="2026-02-20 15:11:50 +0000 UTC" firstStartedPulling="2026-02-20 15:11:52.022154982 +0000 UTC m=+650.282948555" lastFinishedPulling="2026-02-20 15:12:02.380199301 +0000 UTC m=+660.640992874" observedRunningTime="2026-02-20 15:12:02.938660648 +0000 UTC m=+661.199454221" watchObservedRunningTime="2026-02-20 15:12:02.94353251 +0000 UTC m=+661.204326093" Feb 20 
15:12:02.950254 master-0 kubenswrapper[28120]: I0220 15:12:02.950209 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg" event={"ID":"ca00bc4b-3578-4bb3-b9c0-5942d8723893","Type":"ContainerStarted","Data":"fbe03968676ec9efc492b3595d56c887969178ae5a08b0f8862d5453bb956131"} Feb 20 15:12:02.955249 master-0 kubenswrapper[28120]: I0220 15:12:02.955198 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-h4rqf" event={"ID":"3dcb37f4-b170-427b-aa90-fbda68b22a92","Type":"ContainerStarted","Data":"8a31f2cd7160c488747f4b5d9111b1817099ec5e6552cda4b140fce7b46e0156"} Feb 20 15:12:02.972578 master-0 kubenswrapper[28120]: I0220 15:12:02.972503 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-ghsgk" podStartSLOduration=2.362781107 podStartE2EDuration="12.972486992s" podCreationTimestamp="2026-02-20 15:11:50 +0000 UTC" firstStartedPulling="2026-02-20 15:11:51.746298212 +0000 UTC m=+650.007091775" lastFinishedPulling="2026-02-20 15:12:02.356004087 +0000 UTC m=+660.616797660" observedRunningTime="2026-02-20 15:12:02.968995995 +0000 UTC m=+661.229789568" watchObservedRunningTime="2026-02-20 15:12:02.972486992 +0000 UTC m=+661.233280555" Feb 20 15:12:02.996932 master-0 kubenswrapper[28120]: I0220 15:12:02.995804 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" podStartSLOduration=1.970684449 podStartE2EDuration="12.995782243s" podCreationTimestamp="2026-02-20 15:11:50 +0000 UTC" firstStartedPulling="2026-02-20 15:11:51.329700573 +0000 UTC m=+649.590494136" lastFinishedPulling="2026-02-20 15:12:02.354798327 +0000 UTC m=+660.615591930" observedRunningTime="2026-02-20 15:12:02.991232839 +0000 UTC m=+661.252026422" watchObservedRunningTime="2026-02-20 15:12:02.995782243 +0000 UTC m=+661.256575806" Feb 
20 15:12:03.017448 master-0 kubenswrapper[28120]: I0220 15:12:03.016904 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-kfhfm" podStartSLOduration=2.526593652 podStartE2EDuration="13.016878699s" podCreationTimestamp="2026-02-20 15:11:50 +0000 UTC" firstStartedPulling="2026-02-20 15:11:51.868197292 +0000 UTC m=+650.128990855" lastFinishedPulling="2026-02-20 15:12:02.358482329 +0000 UTC m=+660.619275902" observedRunningTime="2026-02-20 15:12:03.012994962 +0000 UTC m=+661.273788555" watchObservedRunningTime="2026-02-20 15:12:03.016878699 +0000 UTC m=+661.277672262" Feb 20 15:12:03.043241 master-0 kubenswrapper[28120]: I0220 15:12:03.043152 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-5jp2l" podStartSLOduration=2.568180199 podStartE2EDuration="13.043134333s" podCreationTimestamp="2026-02-20 15:11:50 +0000 UTC" firstStartedPulling="2026-02-20 15:11:51.904466747 +0000 UTC m=+650.165260310" lastFinishedPulling="2026-02-20 15:12:02.379420841 +0000 UTC m=+660.640214444" observedRunningTime="2026-02-20 15:12:03.028590011 +0000 UTC m=+661.289383584" watchObservedRunningTime="2026-02-20 15:12:03.043134333 +0000 UTC m=+661.303927896" Feb 20 15:12:03.066291 master-0 kubenswrapper[28120]: I0220 15:12:03.066145 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-749d445fd4-gg8gg" podStartSLOduration=2.61713151 podStartE2EDuration="13.066119587s" podCreationTimestamp="2026-02-20 15:11:50 +0000 UTC" firstStartedPulling="2026-02-20 15:11:51.927392359 +0000 UTC m=+650.188185922" lastFinishedPulling="2026-02-20 15:12:02.376380396 +0000 UTC m=+660.637173999" observedRunningTime="2026-02-20 15:12:03.064736122 +0000 UTC m=+661.325529715" watchObservedRunningTime="2026-02-20 15:12:03.066119587 +0000 UTC 
m=+661.326913150" Feb 20 15:12:03.131644 master-0 kubenswrapper[28120]: I0220 15:12:03.131558 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-67fxz" podStartSLOduration=1.892597249 podStartE2EDuration="12.131539598s" podCreationTimestamp="2026-02-20 15:11:51 +0000 UTC" firstStartedPulling="2026-02-20 15:11:52.115304755 +0000 UTC m=+650.376098328" lastFinishedPulling="2026-02-20 15:12:02.354247074 +0000 UTC m=+660.615040677" observedRunningTime="2026-02-20 15:12:03.130381809 +0000 UTC m=+661.391175382" watchObservedRunningTime="2026-02-20 15:12:03.131539598 +0000 UTC m=+661.392333161" Feb 20 15:12:03.969060 master-0 kubenswrapper[28120]: I0220 15:12:03.968995 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-h4rqf" event={"ID":"3dcb37f4-b170-427b-aa90-fbda68b22a92","Type":"ContainerStarted","Data":"0b6d9720afbb64b5c6f917b13df2e6d2a82dfa29ac1555e1715d70133657427f"} Feb 20 15:12:03.970295 master-0 kubenswrapper[28120]: I0220 15:12:03.970246 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-msvrz" Feb 20 15:12:03.990639 master-0 kubenswrapper[28120]: I0220 15:12:03.990557 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-h4rqf" podStartSLOduration=7.9905364599999995 podStartE2EDuration="7.99053646s" podCreationTimestamp="2026-02-20 15:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:12:03.987989246 +0000 UTC m=+662.248782809" watchObservedRunningTime="2026-02-20 15:12:03.99053646 +0000 UTC m=+662.251330033" Feb 20 15:12:07.652436 master-0 kubenswrapper[28120]: I0220 15:12:07.652368 28120 scope.go:117] "RemoveContainer" 
containerID="b3ad512efb5dbbcc33d6138b656a0885487bca2c87d7c1ae457add1c2c74ff8e" Feb 20 15:12:10.749315 master-0 kubenswrapper[28120]: I0220 15:12:10.749242 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-kh9wb" Feb 20 15:12:11.613356 master-0 kubenswrapper[28120]: I0220 15:12:11.613266 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-67fxz" Feb 20 15:12:14.919207 master-0 kubenswrapper[28120]: I0220 15:12:14.919124 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7945d98f64-m5tgk" Feb 20 15:12:24.001291 master-0 kubenswrapper[28120]: I0220 15:12:24.001167 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29"] Feb 20 15:12:24.005944 master-0 kubenswrapper[28120]: I0220 15:12:24.002385 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29" Feb 20 15:12:24.008246 master-0 kubenswrapper[28120]: I0220 15:12:24.008197 28120 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 20 15:12:24.041874 master-0 kubenswrapper[28120]: I0220 15:12:24.041808 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4wxlt"] Feb 20 15:12:24.088944 master-0 kubenswrapper[28120]: I0220 15:12:24.081497 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.146943 master-0 kubenswrapper[28120]: I0220 15:12:24.145805 28120 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 20 15:12:24.147413 master-0 kubenswrapper[28120]: I0220 15:12:24.147285 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 20 15:12:24.179870 master-0 kubenswrapper[28120]: I0220 15:12:24.179825 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29"] Feb 20 15:12:24.228840 master-0 kubenswrapper[28120]: I0220 15:12:24.228753 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-ddtx2"] Feb 20 15:12:24.230041 master-0 kubenswrapper[28120]: I0220 15:12:24.230018 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.234881 master-0 kubenswrapper[28120]: I0220 15:12:24.234839 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-qp6jx"] Feb 20 15:12:24.236100 master-0 kubenswrapper[28120]: I0220 15:12:24.236070 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-qp6jx" Feb 20 15:12:24.239521 master-0 kubenswrapper[28120]: I0220 15:12:24.239496 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 20 15:12:24.239620 master-0 kubenswrapper[28120]: I0220 15:12:24.239501 28120 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 20 15:12:24.239684 master-0 kubenswrapper[28120]: I0220 15:12:24.239665 28120 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 20 15:12:24.239801 master-0 kubenswrapper[28120]: I0220 15:12:24.239766 28120 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 20 15:12:24.251412 master-0 kubenswrapper[28120]: I0220 15:12:24.251264 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-qp6jx"] Feb 20 15:12:24.279602 master-0 kubenswrapper[28120]: I0220 15:12:24.279539 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed264ff3-054a-4a96-a9b1-feeae607f3f1-cert\") pod \"controller-69bbfbf88f-qp6jx\" (UID: \"ed264ff3-054a-4a96-a9b1-feeae607f3f1\") " pod="metallb-system/controller-69bbfbf88f-qp6jx" Feb 20 15:12:24.279754 master-0 kubenswrapper[28120]: I0220 15:12:24.279625 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6j7\" (UniqueName: \"kubernetes.io/projected/2d81ad0b-083d-4e88-a137-40d77ecc4d82-kube-api-access-cd6j7\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.279754 master-0 kubenswrapper[28120]: I0220 15:12:24.279644 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed264ff3-054a-4a96-a9b1-feeae607f3f1-metrics-certs\") pod \"controller-69bbfbf88f-qp6jx\" (UID: \"ed264ff3-054a-4a96-a9b1-feeae607f3f1\") " pod="metallb-system/controller-69bbfbf88f-qp6jx" Feb 20 15:12:24.279754 master-0 kubenswrapper[28120]: I0220 15:12:24.279664 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2d81ad0b-083d-4e88-a137-40d77ecc4d82-metallb-excludel2\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.279754 master-0 kubenswrapper[28120]: I0220 15:12:24.279690 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-memberlist\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.279754 master-0 kubenswrapper[28120]: I0220 15:12:24.279713 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7vd2\" (UniqueName: \"kubernetes.io/projected/f8c37333-9069-4a4b-b2b6-dd9b6a93390b-kube-api-access-s7vd2\") pod \"frr-k8s-webhook-server-78b44bf5bb-pgm29\" (UID: \"f8c37333-9069-4a4b-b2b6-dd9b6a93390b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29" Feb 20 15:12:24.279754 master-0 kubenswrapper[28120]: I0220 15:12:24.279732 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8c37333-9069-4a4b-b2b6-dd9b6a93390b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-pgm29\" (UID: \"f8c37333-9069-4a4b-b2b6-dd9b6a93390b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29" Feb 20 15:12:24.279754 master-0 kubenswrapper[28120]: I0220 15:12:24.279748 
28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-metrics-certs\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.279981 master-0 kubenswrapper[28120]: I0220 15:12:24.279764 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-reloader\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.279981 master-0 kubenswrapper[28120]: I0220 15:12:24.279783 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-frr-startup\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.279981 master-0 kubenswrapper[28120]: I0220 15:12:24.279800 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-frr-sockets\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.279981 master-0 kubenswrapper[28120]: I0220 15:12:24.279842 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-metrics\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.279981 master-0 kubenswrapper[28120]: I0220 15:12:24.279878 28120 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-frr-conf\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.279981 master-0 kubenswrapper[28120]: I0220 15:12:24.279905 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb6n7\" (UniqueName: \"kubernetes.io/projected/ed264ff3-054a-4a96-a9b1-feeae607f3f1-kube-api-access-sb6n7\") pod \"controller-69bbfbf88f-qp6jx\" (UID: \"ed264ff3-054a-4a96-a9b1-feeae607f3f1\") " pod="metallb-system/controller-69bbfbf88f-qp6jx" Feb 20 15:12:24.279981 master-0 kubenswrapper[28120]: I0220 15:12:24.279934 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f98mh\" (UniqueName: \"kubernetes.io/projected/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-kube-api-access-f98mh\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.279981 master-0 kubenswrapper[28120]: I0220 15:12:24.279951 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-metrics-certs\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.380393 master-0 kubenswrapper[28120]: I0220 15:12:24.380325 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7vd2\" (UniqueName: \"kubernetes.io/projected/f8c37333-9069-4a4b-b2b6-dd9b6a93390b-kube-api-access-s7vd2\") pod \"frr-k8s-webhook-server-78b44bf5bb-pgm29\" (UID: \"f8c37333-9069-4a4b-b2b6-dd9b6a93390b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29" Feb 20 15:12:24.380393 master-0 
kubenswrapper[28120]: I0220 15:12:24.380376 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8c37333-9069-4a4b-b2b6-dd9b6a93390b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-pgm29\" (UID: \"f8c37333-9069-4a4b-b2b6-dd9b6a93390b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29" Feb 20 15:12:24.380393 master-0 kubenswrapper[28120]: I0220 15:12:24.380396 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-metrics-certs\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.380651 master-0 kubenswrapper[28120]: I0220 15:12:24.380417 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-reloader\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.380651 master-0 kubenswrapper[28120]: I0220 15:12:24.380436 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-frr-startup\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.380651 master-0 kubenswrapper[28120]: I0220 15:12:24.380452 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-frr-sockets\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.380651 master-0 kubenswrapper[28120]: I0220 15:12:24.380477 28120 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-metrics\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.380651 master-0 kubenswrapper[28120]: I0220 15:12:24.380511 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-frr-conf\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.380651 master-0 kubenswrapper[28120]: I0220 15:12:24.380534 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb6n7\" (UniqueName: \"kubernetes.io/projected/ed264ff3-054a-4a96-a9b1-feeae607f3f1-kube-api-access-sb6n7\") pod \"controller-69bbfbf88f-qp6jx\" (UID: \"ed264ff3-054a-4a96-a9b1-feeae607f3f1\") " pod="metallb-system/controller-69bbfbf88f-qp6jx" Feb 20 15:12:24.380651 master-0 kubenswrapper[28120]: I0220 15:12:24.380548 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f98mh\" (UniqueName: \"kubernetes.io/projected/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-kube-api-access-f98mh\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.380651 master-0 kubenswrapper[28120]: I0220 15:12:24.380563 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-metrics-certs\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.380651 master-0 kubenswrapper[28120]: I0220 15:12:24.380582 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/ed264ff3-054a-4a96-a9b1-feeae607f3f1-cert\") pod \"controller-69bbfbf88f-qp6jx\" (UID: \"ed264ff3-054a-4a96-a9b1-feeae607f3f1\") " pod="metallb-system/controller-69bbfbf88f-qp6jx" Feb 20 15:12:24.380651 master-0 kubenswrapper[28120]: I0220 15:12:24.380613 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd6j7\" (UniqueName: \"kubernetes.io/projected/2d81ad0b-083d-4e88-a137-40d77ecc4d82-kube-api-access-cd6j7\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.380651 master-0 kubenswrapper[28120]: I0220 15:12:24.380629 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed264ff3-054a-4a96-a9b1-feeae607f3f1-metrics-certs\") pod \"controller-69bbfbf88f-qp6jx\" (UID: \"ed264ff3-054a-4a96-a9b1-feeae607f3f1\") " pod="metallb-system/controller-69bbfbf88f-qp6jx" Feb 20 15:12:24.380651 master-0 kubenswrapper[28120]: I0220 15:12:24.380648 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2d81ad0b-083d-4e88-a137-40d77ecc4d82-metallb-excludel2\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.381117 master-0 kubenswrapper[28120]: I0220 15:12:24.380673 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-memberlist\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.381117 master-0 kubenswrapper[28120]: E0220 15:12:24.380787 28120 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 20 15:12:24.381117 master-0 kubenswrapper[28120]: E0220 
15:12:24.380839 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-memberlist podName:2d81ad0b-083d-4e88-a137-40d77ecc4d82 nodeName:}" failed. No retries permitted until 2026-02-20 15:12:24.880821191 +0000 UTC m=+683.141614754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-memberlist") pod "speaker-ddtx2" (UID: "2d81ad0b-083d-4e88-a137-40d77ecc4d82") : secret "metallb-memberlist" not found Feb 20 15:12:24.383908 master-0 kubenswrapper[28120]: I0220 15:12:24.383852 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-frr-startup\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.384023 master-0 kubenswrapper[28120]: E0220 15:12:24.383986 28120 secret.go:189] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 20 15:12:24.384064 master-0 kubenswrapper[28120]: E0220 15:12:24.384043 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-metrics-certs podName:2d81ad0b-083d-4e88-a137-40d77ecc4d82 nodeName:}" failed. No retries permitted until 2026-02-20 15:12:24.884026851 +0000 UTC m=+683.144820414 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-metrics-certs") pod "speaker-ddtx2" (UID: "2d81ad0b-083d-4e88-a137-40d77ecc4d82") : secret "speaker-certs-secret" not found Feb 20 15:12:24.385505 master-0 kubenswrapper[28120]: I0220 15:12:24.385472 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f8c37333-9069-4a4b-b2b6-dd9b6a93390b-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-pgm29\" (UID: \"f8c37333-9069-4a4b-b2b6-dd9b6a93390b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29" Feb 20 15:12:24.386417 master-0 kubenswrapper[28120]: E0220 15:12:24.386393 28120 secret.go:189] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 20 15:12:24.386539 master-0 kubenswrapper[28120]: E0220 15:12:24.386525 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed264ff3-054a-4a96-a9b1-feeae607f3f1-metrics-certs podName:ed264ff3-054a-4a96-a9b1-feeae607f3f1 nodeName:}" failed. No retries permitted until 2026-02-20 15:12:24.886510553 +0000 UTC m=+683.147304226 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed264ff3-054a-4a96-a9b1-feeae607f3f1-metrics-certs") pod "controller-69bbfbf88f-qp6jx" (UID: "ed264ff3-054a-4a96-a9b1-feeae607f3f1") : secret "controller-certs-secret" not found Feb 20 15:12:24.387233 master-0 kubenswrapper[28120]: I0220 15:12:24.387206 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-reloader\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.387309 master-0 kubenswrapper[28120]: I0220 15:12:24.387245 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-frr-sockets\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.387402 master-0 kubenswrapper[28120]: I0220 15:12:24.387363 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-metrics\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.387552 master-0 kubenswrapper[28120]: I0220 15:12:24.387508 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-frr-conf\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.387629 master-0 kubenswrapper[28120]: I0220 15:12:24.387612 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2d81ad0b-083d-4e88-a137-40d77ecc4d82-metallb-excludel2\") pod \"speaker-ddtx2\" 
(UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.389885 master-0 kubenswrapper[28120]: I0220 15:12:24.389844 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-metrics-certs\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.389973 master-0 kubenswrapper[28120]: I0220 15:12:24.389908 28120 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 20 15:12:24.400995 master-0 kubenswrapper[28120]: I0220 15:12:24.400881 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7vd2\" (UniqueName: \"kubernetes.io/projected/f8c37333-9069-4a4b-b2b6-dd9b6a93390b-kube-api-access-s7vd2\") pod \"frr-k8s-webhook-server-78b44bf5bb-pgm29\" (UID: \"f8c37333-9069-4a4b-b2b6-dd9b6a93390b\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29" Feb 20 15:12:24.402861 master-0 kubenswrapper[28120]: I0220 15:12:24.402837 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb6n7\" (UniqueName: \"kubernetes.io/projected/ed264ff3-054a-4a96-a9b1-feeae607f3f1-kube-api-access-sb6n7\") pod \"controller-69bbfbf88f-qp6jx\" (UID: \"ed264ff3-054a-4a96-a9b1-feeae607f3f1\") " pod="metallb-system/controller-69bbfbf88f-qp6jx" Feb 20 15:12:24.403590 master-0 kubenswrapper[28120]: I0220 15:12:24.403542 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed264ff3-054a-4a96-a9b1-feeae607f3f1-cert\") pod \"controller-69bbfbf88f-qp6jx\" (UID: \"ed264ff3-054a-4a96-a9b1-feeae607f3f1\") " pod="metallb-system/controller-69bbfbf88f-qp6jx" Feb 20 15:12:24.408063 master-0 kubenswrapper[28120]: I0220 15:12:24.407986 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-cd6j7\" (UniqueName: \"kubernetes.io/projected/2d81ad0b-083d-4e88-a137-40d77ecc4d82-kube-api-access-cd6j7\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.408162 master-0 kubenswrapper[28120]: I0220 15:12:24.408108 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f98mh\" (UniqueName: \"kubernetes.io/projected/4b2c73aa-cb2b-44a5-9d6c-a790ab280c22-kube-api-access-f98mh\") pod \"frr-k8s-4wxlt\" (UID: \"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22\") " pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.499790 master-0 kubenswrapper[28120]: I0220 15:12:24.499744 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:24.631497 master-0 kubenswrapper[28120]: I0220 15:12:24.631452 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29" Feb 20 15:12:24.893946 master-0 kubenswrapper[28120]: I0220 15:12:24.889356 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed264ff3-054a-4a96-a9b1-feeae607f3f1-metrics-certs\") pod \"controller-69bbfbf88f-qp6jx\" (UID: \"ed264ff3-054a-4a96-a9b1-feeae607f3f1\") " pod="metallb-system/controller-69bbfbf88f-qp6jx" Feb 20 15:12:24.893946 master-0 kubenswrapper[28120]: I0220 15:12:24.889461 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-memberlist\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.893946 master-0 kubenswrapper[28120]: I0220 15:12:24.889520 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-metrics-certs\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.893946 master-0 kubenswrapper[28120]: E0220 15:12:24.889940 28120 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 20 15:12:24.893946 master-0 kubenswrapper[28120]: E0220 15:12:24.890044 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-memberlist podName:2d81ad0b-083d-4e88-a137-40d77ecc4d82 nodeName:}" failed. No retries permitted until 2026-02-20 15:12:25.890011109 +0000 UTC m=+684.150804702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-memberlist") pod "speaker-ddtx2" (UID: "2d81ad0b-083d-4e88-a137-40d77ecc4d82") : secret "metallb-memberlist" not found Feb 20 15:12:24.894874 master-0 kubenswrapper[28120]: I0220 15:12:24.894372 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-metrics-certs\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:24.896968 master-0 kubenswrapper[28120]: I0220 15:12:24.895974 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed264ff3-054a-4a96-a9b1-feeae607f3f1-metrics-certs\") pod \"controller-69bbfbf88f-qp6jx\" (UID: \"ed264ff3-054a-4a96-a9b1-feeae607f3f1\") " pod="metallb-system/controller-69bbfbf88f-qp6jx" Feb 20 15:12:25.070904 master-0 kubenswrapper[28120]: W0220 15:12:25.070852 28120 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8c37333_9069_4a4b_b2b6_dd9b6a93390b.slice/crio-fb00475e4d8776838e7e5682e9cda207d77b18725861c4d406bce86628f41813 WatchSource:0}: Error finding container fb00475e4d8776838e7e5682e9cda207d77b18725861c4d406bce86628f41813: Status 404 returned error can't find the container with id fb00475e4d8776838e7e5682e9cda207d77b18725861c4d406bce86628f41813 Feb 20 15:12:25.072215 master-0 kubenswrapper[28120]: I0220 15:12:25.072157 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29"] Feb 20 15:12:25.175365 master-0 kubenswrapper[28120]: I0220 15:12:25.175311 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-qp6jx" Feb 20 15:12:25.227074 master-0 kubenswrapper[28120]: I0220 15:12:25.226997 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29" event={"ID":"f8c37333-9069-4a4b-b2b6-dd9b6a93390b","Type":"ContainerStarted","Data":"fb00475e4d8776838e7e5682e9cda207d77b18725861c4d406bce86628f41813"} Feb 20 15:12:25.229344 master-0 kubenswrapper[28120]: I0220 15:12:25.229294 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wxlt" event={"ID":"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22","Type":"ContainerStarted","Data":"ee6b0a04ca015fc00a1b55c5a6c5c2058bcea34815c203e7940a825e4fe45eec"} Feb 20 15:12:25.665576 master-0 kubenswrapper[28120]: W0220 15:12:25.665519 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded264ff3_054a_4a96_a9b1_feeae607f3f1.slice/crio-01d20170a017633d9e2a536964f9effdb62ededffb755902695d4c486a52c200 WatchSource:0}: Error finding container 01d20170a017633d9e2a536964f9effdb62ededffb755902695d4c486a52c200: Status 404 returned error can't find the container with id 
01d20170a017633d9e2a536964f9effdb62ededffb755902695d4c486a52c200 Feb 20 15:12:25.667571 master-0 kubenswrapper[28120]: I0220 15:12:25.667527 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-qp6jx"] Feb 20 15:12:25.910015 master-0 kubenswrapper[28120]: I0220 15:12:25.909895 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-memberlist\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:25.913664 master-0 kubenswrapper[28120]: I0220 15:12:25.913635 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2d81ad0b-083d-4e88-a137-40d77ecc4d82-memberlist\") pod \"speaker-ddtx2\" (UID: \"2d81ad0b-083d-4e88-a137-40d77ecc4d82\") " pod="metallb-system/speaker-ddtx2" Feb 20 15:12:26.067062 master-0 kubenswrapper[28120]: I0220 15:12:26.063254 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-ddtx2" Feb 20 15:12:26.108190 master-0 kubenswrapper[28120]: I0220 15:12:26.100465 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-zv5c4"] Feb 20 15:12:26.110484 master-0 kubenswrapper[28120]: I0220 15:12:26.110442 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zv5c4" Feb 20 15:12:26.134993 master-0 kubenswrapper[28120]: I0220 15:12:26.134901 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-zv5c4"] Feb 20 15:12:26.146581 master-0 kubenswrapper[28120]: I0220 15:12:26.145787 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs"] Feb 20 15:12:26.149151 master-0 kubenswrapper[28120]: I0220 15:12:26.146834 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs" Feb 20 15:12:26.149255 master-0 kubenswrapper[28120]: I0220 15:12:26.149161 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 20 15:12:26.156232 master-0 kubenswrapper[28120]: I0220 15:12:26.156172 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-pjgrq"] Feb 20 15:12:26.158086 master-0 kubenswrapper[28120]: I0220 15:12:26.157306 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.174663 master-0 kubenswrapper[28120]: I0220 15:12:26.174575 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs"] Feb 20 15:12:26.223232 master-0 kubenswrapper[28120]: I0220 15:12:26.212906 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqpdv\" (UniqueName: \"kubernetes.io/projected/ff53984a-4111-4c0e-a499-aa0dd2b72294-kube-api-access-fqpdv\") pod \"nmstate-metrics-58c85c668d-zv5c4\" (UID: \"ff53984a-4111-4c0e-a499-aa0dd2b72294\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-zv5c4" Feb 20 15:12:26.223232 master-0 kubenswrapper[28120]: I0220 15:12:26.212971 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/97a02112-c089-4f6d-8ff5-9d09b9ca022a-nmstate-lock\") pod \"nmstate-handler-pjgrq\" (UID: \"97a02112-c089-4f6d-8ff5-9d09b9ca022a\") " pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.223232 master-0 kubenswrapper[28120]: I0220 15:12:26.213007 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/97a02112-c089-4f6d-8ff5-9d09b9ca022a-ovs-socket\") pod \"nmstate-handler-pjgrq\" (UID: \"97a02112-c089-4f6d-8ff5-9d09b9ca022a\") " pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.223232 master-0 kubenswrapper[28120]: I0220 15:12:26.213025 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcngr\" (UniqueName: \"kubernetes.io/projected/97a02112-c089-4f6d-8ff5-9d09b9ca022a-kube-api-access-lcngr\") pod \"nmstate-handler-pjgrq\" (UID: \"97a02112-c089-4f6d-8ff5-9d09b9ca022a\") " pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.223232 master-0 
kubenswrapper[28120]: I0220 15:12:26.213069 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/11de2d3b-d697-4ba8-8902-9c484380edff-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-gk8gs\" (UID: \"11de2d3b-d697-4ba8-8902-9c484380edff\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs" Feb 20 15:12:26.223232 master-0 kubenswrapper[28120]: I0220 15:12:26.213092 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlcsb\" (UniqueName: \"kubernetes.io/projected/11de2d3b-d697-4ba8-8902-9c484380edff-kube-api-access-qlcsb\") pod \"nmstate-webhook-866bcb46dc-gk8gs\" (UID: \"11de2d3b-d697-4ba8-8902-9c484380edff\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs" Feb 20 15:12:26.223232 master-0 kubenswrapper[28120]: I0220 15:12:26.213108 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/97a02112-c089-4f6d-8ff5-9d09b9ca022a-dbus-socket\") pod \"nmstate-handler-pjgrq\" (UID: \"97a02112-c089-4f6d-8ff5-9d09b9ca022a\") " pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.251453 master-0 kubenswrapper[28120]: I0220 15:12:26.251389 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ddtx2" event={"ID":"2d81ad0b-083d-4e88-a137-40d77ecc4d82","Type":"ContainerStarted","Data":"ca87778e7082e2295ef8d9bd624232a0855d19ef8a861dba472589c21839ddb8"} Feb 20 15:12:26.256139 master-0 kubenswrapper[28120]: I0220 15:12:26.256094 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr"] Feb 20 15:12:26.265662 master-0 kubenswrapper[28120]: I0220 15:12:26.265614 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr" Feb 20 15:12:26.272354 master-0 kubenswrapper[28120]: I0220 15:12:26.272290 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-qp6jx" event={"ID":"ed264ff3-054a-4a96-a9b1-feeae607f3f1","Type":"ContainerStarted","Data":"3220b85b3179a59cc4b3bd754fe52bedc9fadd06602f094f9c71c26e95a629e0"} Feb 20 15:12:26.272511 master-0 kubenswrapper[28120]: I0220 15:12:26.272480 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-qp6jx" event={"ID":"ed264ff3-054a-4a96-a9b1-feeae607f3f1","Type":"ContainerStarted","Data":"01d20170a017633d9e2a536964f9effdb62ededffb755902695d4c486a52c200"} Feb 20 15:12:26.287202 master-0 kubenswrapper[28120]: I0220 15:12:26.287151 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 20 15:12:26.287397 master-0 kubenswrapper[28120]: I0220 15:12:26.287373 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 20 15:12:26.297337 master-0 kubenswrapper[28120]: I0220 15:12:26.297272 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr"] Feb 20 15:12:26.313951 master-0 kubenswrapper[28120]: I0220 15:12:26.313890 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqpdv\" (UniqueName: \"kubernetes.io/projected/ff53984a-4111-4c0e-a499-aa0dd2b72294-kube-api-access-fqpdv\") pod \"nmstate-metrics-58c85c668d-zv5c4\" (UID: \"ff53984a-4111-4c0e-a499-aa0dd2b72294\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-zv5c4" Feb 20 15:12:26.314011 master-0 kubenswrapper[28120]: I0220 15:12:26.313958 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/97a02112-c089-4f6d-8ff5-9d09b9ca022a-nmstate-lock\") pod \"nmstate-handler-pjgrq\" (UID: \"97a02112-c089-4f6d-8ff5-9d09b9ca022a\") " pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.314011 master-0 kubenswrapper[28120]: I0220 15:12:26.313997 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/97a02112-c089-4f6d-8ff5-9d09b9ca022a-ovs-socket\") pod \"nmstate-handler-pjgrq\" (UID: \"97a02112-c089-4f6d-8ff5-9d09b9ca022a\") " pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.314587 master-0 kubenswrapper[28120]: I0220 15:12:26.314014 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcngr\" (UniqueName: \"kubernetes.io/projected/97a02112-c089-4f6d-8ff5-9d09b9ca022a-kube-api-access-lcngr\") pod \"nmstate-handler-pjgrq\" (UID: \"97a02112-c089-4f6d-8ff5-9d09b9ca022a\") " pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.314587 master-0 kubenswrapper[28120]: I0220 15:12:26.314108 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/97a02112-c089-4f6d-8ff5-9d09b9ca022a-ovs-socket\") pod \"nmstate-handler-pjgrq\" (UID: \"97a02112-c089-4f6d-8ff5-9d09b9ca022a\") " pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.314587 master-0 kubenswrapper[28120]: I0220 15:12:26.314168 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/97a02112-c089-4f6d-8ff5-9d09b9ca022a-nmstate-lock\") pod \"nmstate-handler-pjgrq\" (UID: \"97a02112-c089-4f6d-8ff5-9d09b9ca022a\") " pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.314587 master-0 kubenswrapper[28120]: I0220 15:12:26.314170 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-28pwr\" (UID: \"eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr" Feb 20 15:12:26.314587 master-0 kubenswrapper[28120]: I0220 15:12:26.314201 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-28pwr\" (UID: \"eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr" Feb 20 15:12:26.314587 master-0 kubenswrapper[28120]: I0220 15:12:26.314235 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/11de2d3b-d697-4ba8-8902-9c484380edff-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-gk8gs\" (UID: \"11de2d3b-d697-4ba8-8902-9c484380edff\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs" Feb 20 15:12:26.314587 master-0 kubenswrapper[28120]: I0220 15:12:26.314255 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdh2r\" (UniqueName: \"kubernetes.io/projected/eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854-kube-api-access-tdh2r\") pod \"nmstate-console-plugin-5c78fc5d65-28pwr\" (UID: \"eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr" Feb 20 15:12:26.314587 master-0 kubenswrapper[28120]: I0220 15:12:26.314279 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlcsb\" (UniqueName: \"kubernetes.io/projected/11de2d3b-d697-4ba8-8902-9c484380edff-kube-api-access-qlcsb\") pod \"nmstate-webhook-866bcb46dc-gk8gs\" (UID: \"11de2d3b-d697-4ba8-8902-9c484380edff\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs" 
Feb 20 15:12:26.314587 master-0 kubenswrapper[28120]: I0220 15:12:26.314296 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/97a02112-c089-4f6d-8ff5-9d09b9ca022a-dbus-socket\") pod \"nmstate-handler-pjgrq\" (UID: \"97a02112-c089-4f6d-8ff5-9d09b9ca022a\") " pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.314587 master-0 kubenswrapper[28120]: E0220 15:12:26.314387 28120 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 20 15:12:26.314587 master-0 kubenswrapper[28120]: I0220 15:12:26.314427 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/97a02112-c089-4f6d-8ff5-9d09b9ca022a-dbus-socket\") pod \"nmstate-handler-pjgrq\" (UID: \"97a02112-c089-4f6d-8ff5-9d09b9ca022a\") " pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.314587 master-0 kubenswrapper[28120]: E0220 15:12:26.314512 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11de2d3b-d697-4ba8-8902-9c484380edff-tls-key-pair podName:11de2d3b-d697-4ba8-8902-9c484380edff nodeName:}" failed. No retries permitted until 2026-02-20 15:12:26.814496682 +0000 UTC m=+685.075290235 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/11de2d3b-d697-4ba8-8902-9c484380edff-tls-key-pair") pod "nmstate-webhook-866bcb46dc-gk8gs" (UID: "11de2d3b-d697-4ba8-8902-9c484380edff") : secret "openshift-nmstate-webhook" not found Feb 20 15:12:26.332061 master-0 kubenswrapper[28120]: I0220 15:12:26.330391 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqpdv\" (UniqueName: \"kubernetes.io/projected/ff53984a-4111-4c0e-a499-aa0dd2b72294-kube-api-access-fqpdv\") pod \"nmstate-metrics-58c85c668d-zv5c4\" (UID: \"ff53984a-4111-4c0e-a499-aa0dd2b72294\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-zv5c4" Feb 20 15:12:26.347762 master-0 kubenswrapper[28120]: I0220 15:12:26.344450 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcngr\" (UniqueName: \"kubernetes.io/projected/97a02112-c089-4f6d-8ff5-9d09b9ca022a-kube-api-access-lcngr\") pod \"nmstate-handler-pjgrq\" (UID: \"97a02112-c089-4f6d-8ff5-9d09b9ca022a\") " pod="openshift-nmstate/nmstate-handler-pjgrq" Feb 20 15:12:26.349724 master-0 kubenswrapper[28120]: I0220 15:12:26.349674 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlcsb\" (UniqueName: \"kubernetes.io/projected/11de2d3b-d697-4ba8-8902-9c484380edff-kube-api-access-qlcsb\") pod \"nmstate-webhook-866bcb46dc-gk8gs\" (UID: \"11de2d3b-d697-4ba8-8902-9c484380edff\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs" Feb 20 15:12:26.416498 master-0 kubenswrapper[28120]: I0220 15:12:26.415113 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdh2r\" (UniqueName: \"kubernetes.io/projected/eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854-kube-api-access-tdh2r\") pod \"nmstate-console-plugin-5c78fc5d65-28pwr\" (UID: \"eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr" Feb 20 15:12:26.416498 
master-0 kubenswrapper[28120]: I0220 15:12:26.415264 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-28pwr\" (UID: \"eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr" Feb 20 15:12:26.416498 master-0 kubenswrapper[28120]: I0220 15:12:26.415285 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-28pwr\" (UID: \"eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr" Feb 20 15:12:26.417352 master-0 kubenswrapper[28120]: I0220 15:12:26.417322 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-28pwr\" (UID: \"eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr" Feb 20 15:12:26.419953 master-0 kubenswrapper[28120]: I0220 15:12:26.418645 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-28pwr\" (UID: \"eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr" Feb 20 15:12:26.446005 master-0 kubenswrapper[28120]: I0220 15:12:26.443585 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zv5c4"
Feb 20 15:12:26.450860 master-0 kubenswrapper[28120]: I0220 15:12:26.446816 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdh2r\" (UniqueName: \"kubernetes.io/projected/eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854-kube-api-access-tdh2r\") pod \"nmstate-console-plugin-5c78fc5d65-28pwr\" (UID: \"eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr"
Feb 20 15:12:26.454113 master-0 kubenswrapper[28120]: I0220 15:12:26.454030 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-56c6d95778-qpzbz"]
Feb 20 15:12:26.455301 master-0 kubenswrapper[28120]: I0220 15:12:26.455276 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.464534 master-0 kubenswrapper[28120]: I0220 15:12:26.463171 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56c6d95778-qpzbz"]
Feb 20 15:12:26.485295 master-0 kubenswrapper[28120]: I0220 15:12:26.485240 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-pjgrq"
Feb 20 15:12:26.518567 master-0 kubenswrapper[28120]: I0220 15:12:26.518520 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-service-ca\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.518567 master-0 kubenswrapper[28120]: I0220 15:12:26.518567 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-trusted-ca-bundle\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.518766 master-0 kubenswrapper[28120]: I0220 15:12:26.518587 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdljh\" (UniqueName: \"kubernetes.io/projected/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-kube-api-access-kdljh\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.518805 master-0 kubenswrapper[28120]: I0220 15:12:26.518712 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-console-oauth-config\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.518862 master-0 kubenswrapper[28120]: I0220 15:12:26.518836 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-console-config\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.518902 master-0 kubenswrapper[28120]: I0220 15:12:26.518870 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-console-serving-cert\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.518995 master-0 kubenswrapper[28120]: I0220 15:12:26.518908 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-oauth-serving-cert\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.519405 master-0 kubenswrapper[28120]: W0220 15:12:26.519349 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97a02112_c089_4f6d_8ff5_9d09b9ca022a.slice/crio-eb36ae9579c5b1d413151a63d884e42579b4e42f7db12d58bfb1993d69683fc0 WatchSource:0}: Error finding container eb36ae9579c5b1d413151a63d884e42579b4e42f7db12d58bfb1993d69683fc0: Status 404 returned error can't find the container with id eb36ae9579c5b1d413151a63d884e42579b4e42f7db12d58bfb1993d69683fc0
Feb 20 15:12:26.603052 master-0 kubenswrapper[28120]: I0220 15:12:26.594815 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr"
Feb 20 15:12:26.654679 master-0 kubenswrapper[28120]: I0220 15:12:26.654249 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-service-ca\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.654679 master-0 kubenswrapper[28120]: I0220 15:12:26.654333 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-trusted-ca-bundle\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.654679 master-0 kubenswrapper[28120]: I0220 15:12:26.654349 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdljh\" (UniqueName: \"kubernetes.io/projected/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-kube-api-access-kdljh\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.654679 master-0 kubenswrapper[28120]: I0220 15:12:26.654559 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-console-oauth-config\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.654904 master-0 kubenswrapper[28120]: I0220 15:12:26.654699 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-console-config\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.654904 master-0 kubenswrapper[28120]: I0220 15:12:26.654737 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-console-serving-cert\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.654904 master-0 kubenswrapper[28120]: I0220 15:12:26.654814 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-oauth-serving-cert\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.655205 master-0 kubenswrapper[28120]: I0220 15:12:26.655171 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-service-ca\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.663827 master-0 kubenswrapper[28120]: I0220 15:12:26.656879 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-console-config\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.663827 master-0 kubenswrapper[28120]: I0220 15:12:26.657781 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-trusted-ca-bundle\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.663827 master-0 kubenswrapper[28120]: I0220 15:12:26.658441 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-console-oauth-config\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.665349 master-0 kubenswrapper[28120]: I0220 15:12:26.664811 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-console-serving-cert\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.684403 master-0 kubenswrapper[28120]: I0220 15:12:26.677668 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-oauth-serving-cert\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.684403 master-0 kubenswrapper[28120]: I0220 15:12:26.683370 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdljh\" (UniqueName: \"kubernetes.io/projected/94895bb1-f5be-4bd6-ab50-45b0ac8c7278-kube-api-access-kdljh\") pod \"console-56c6d95778-qpzbz\" (UID: \"94895bb1-f5be-4bd6-ab50-45b0ac8c7278\") " pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.841858 master-0 kubenswrapper[28120]: I0220 15:12:26.841753 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:26.863946 master-0 kubenswrapper[28120]: I0220 15:12:26.861843 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/11de2d3b-d697-4ba8-8902-9c484380edff-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-gk8gs\" (UID: \"11de2d3b-d697-4ba8-8902-9c484380edff\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs"
Feb 20 15:12:26.864808 master-0 kubenswrapper[28120]: I0220 15:12:26.864618 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/11de2d3b-d697-4ba8-8902-9c484380edff-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-gk8gs\" (UID: \"11de2d3b-d697-4ba8-8902-9c484380edff\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs"
Feb 20 15:12:26.976155 master-0 kubenswrapper[28120]: I0220 15:12:26.972412 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-zv5c4"]
Feb 20 15:12:27.063055 master-0 kubenswrapper[28120]: I0220 15:12:27.062989 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs"
Feb 20 15:12:27.072325 master-0 kubenswrapper[28120]: I0220 15:12:27.071431 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr"]
Feb 20 15:12:27.078872 master-0 kubenswrapper[28120]: W0220 15:12:27.078838 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeda60af7_e9c6_4c59_a1fe_a0b2cb0b7854.slice/crio-be66d046e83a443379308362598cbd70717af474a52d29b84c187c4faf50935f WatchSource:0}: Error finding container be66d046e83a443379308362598cbd70717af474a52d29b84c187c4faf50935f: Status 404 returned error can't find the container with id be66d046e83a443379308362598cbd70717af474a52d29b84c187c4faf50935f
Feb 20 15:12:27.282159 master-0 kubenswrapper[28120]: I0220 15:12:27.282110 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56c6d95778-qpzbz"]
Feb 20 15:12:27.282660 master-0 kubenswrapper[28120]: W0220 15:12:27.282285 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94895bb1_f5be_4bd6_ab50_45b0ac8c7278.slice/crio-0c698f2b74628613f156c9f2a9a260089a0dd4d820473c30735692c3f8beb2d1 WatchSource:0}: Error finding container 0c698f2b74628613f156c9f2a9a260089a0dd4d820473c30735692c3f8beb2d1: Status 404 returned error can't find the container with id 0c698f2b74628613f156c9f2a9a260089a0dd4d820473c30735692c3f8beb2d1
Feb 20 15:12:27.283037 master-0 kubenswrapper[28120]: I0220 15:12:27.282960 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pjgrq" event={"ID":"97a02112-c089-4f6d-8ff5-9d09b9ca022a","Type":"ContainerStarted","Data":"eb36ae9579c5b1d413151a63d884e42579b4e42f7db12d58bfb1993d69683fc0"}
Feb 20 15:12:27.284388 master-0 kubenswrapper[28120]: I0220 15:12:27.284360 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr" event={"ID":"eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854","Type":"ContainerStarted","Data":"be66d046e83a443379308362598cbd70717af474a52d29b84c187c4faf50935f"}
Feb 20 15:12:27.285718 master-0 kubenswrapper[28120]: I0220 15:12:27.285681 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ddtx2" event={"ID":"2d81ad0b-083d-4e88-a137-40d77ecc4d82","Type":"ContainerStarted","Data":"599dbc650338dbba3c622002e7c02844573d8b85668e893accdb2505b0b3b18d"}
Feb 20 15:12:27.287668 master-0 kubenswrapper[28120]: I0220 15:12:27.286786 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zv5c4" event={"ID":"ff53984a-4111-4c0e-a499-aa0dd2b72294","Type":"ContainerStarted","Data":"41fe814f70ea2b43738879d89f2007d71d3114ab80104527cd1cc45d63cf9168"}
Feb 20 15:12:27.505435 master-0 kubenswrapper[28120]: I0220 15:12:27.505369 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs"]
Feb 20 15:12:28.304129 master-0 kubenswrapper[28120]: I0220 15:12:28.304069 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56c6d95778-qpzbz" event={"ID":"94895bb1-f5be-4bd6-ab50-45b0ac8c7278","Type":"ContainerStarted","Data":"e7b7cc207eda865269e4b87a7f657c537a34dfc41f0c3197d071b8fb8ba79cdd"}
Feb 20 15:12:28.304129 master-0 kubenswrapper[28120]: I0220 15:12:28.304135 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56c6d95778-qpzbz" event={"ID":"94895bb1-f5be-4bd6-ab50-45b0ac8c7278","Type":"ContainerStarted","Data":"0c698f2b74628613f156c9f2a9a260089a0dd4d820473c30735692c3f8beb2d1"}
Feb 20 15:12:28.308033 master-0 kubenswrapper[28120]: I0220 15:12:28.307989 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ddtx2" event={"ID":"2d81ad0b-083d-4e88-a137-40d77ecc4d82","Type":"ContainerStarted","Data":"67781723d4a94b057fba41703a2b690ae9caaa14c383a991c4b4489126083444"}
Feb 20 15:12:28.308214 master-0 kubenswrapper[28120]: I0220 15:12:28.308160 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-ddtx2"
Feb 20 15:12:28.310673 master-0 kubenswrapper[28120]: I0220 15:12:28.310627 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-qp6jx" event={"ID":"ed264ff3-054a-4a96-a9b1-feeae607f3f1","Type":"ContainerStarted","Data":"18857b13133638c10f9a4d4b2c84d93e615d67b9182a5a601740317b51c16ef6"}
Feb 20 15:12:28.310791 master-0 kubenswrapper[28120]: I0220 15:12:28.310765 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-qp6jx"
Feb 20 15:12:28.312803 master-0 kubenswrapper[28120]: I0220 15:12:28.312760 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs" event={"ID":"11de2d3b-d697-4ba8-8902-9c484380edff","Type":"ContainerStarted","Data":"a752f440948c251f6e288027e858528fadbe72d24f79b5b65acec8aabf2f0bff"}
Feb 20 15:12:28.327875 master-0 kubenswrapper[28120]: I0220 15:12:28.327791 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-56c6d95778-qpzbz" podStartSLOduration=2.32777309 podStartE2EDuration="2.32777309s" podCreationTimestamp="2026-02-20 15:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:12:28.32256958 +0000 UTC m=+686.583363143" watchObservedRunningTime="2026-02-20 15:12:28.32777309 +0000 UTC m=+686.588566663"
Feb 20 15:12:28.346265 master-0 kubenswrapper[28120]: I0220 15:12:28.346181 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-qp6jx" podStartSLOduration=2.953701743 podStartE2EDuration="4.346160028s" podCreationTimestamp="2026-02-20 15:12:24 +0000 UTC" firstStartedPulling="2026-02-20 15:12:25.835516158 +0000 UTC m=+684.096309711" lastFinishedPulling="2026-02-20 15:12:27.227974393 +0000 UTC m=+685.488767996" observedRunningTime="2026-02-20 15:12:28.339126683 +0000 UTC m=+686.599920286" watchObservedRunningTime="2026-02-20 15:12:28.346160028 +0000 UTC m=+686.606953611"
Feb 20 15:12:28.360244 master-0 kubenswrapper[28120]: I0220 15:12:28.360099 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-ddtx2" podStartSLOduration=3.320288415 podStartE2EDuration="4.360080515s" podCreationTimestamp="2026-02-20 15:12:24 +0000 UTC" firstStartedPulling="2026-02-20 15:12:26.386855347 +0000 UTC m=+684.647648910" lastFinishedPulling="2026-02-20 15:12:27.426647437 +0000 UTC m=+685.687441010" observedRunningTime="2026-02-20 15:12:28.359116161 +0000 UTC m=+686.619909724" watchObservedRunningTime="2026-02-20 15:12:28.360080515 +0000 UTC m=+686.620874078"
Feb 20 15:12:33.373124 master-0 kubenswrapper[28120]: I0220 15:12:33.373060 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs" event={"ID":"11de2d3b-d697-4ba8-8902-9c484380edff","Type":"ContainerStarted","Data":"397e2168f06d53220a6ec8b5cc7d9079296996356170049315fdc2f155079485"}
Feb 20 15:12:33.374049 master-0 kubenswrapper[28120]: I0220 15:12:33.373161 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs"
Feb 20 15:12:33.376010 master-0 kubenswrapper[28120]: I0220 15:12:33.375951 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pjgrq" event={"ID":"97a02112-c089-4f6d-8ff5-9d09b9ca022a","Type":"ContainerStarted","Data":"d8f23cdabe15edbf101c325e3b0ed46263030216be51d118b2a8ab7278b66e96"}
Feb 20 15:12:33.376153 master-0 kubenswrapper[28120]: I0220 15:12:33.376020 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-pjgrq"
Feb 20 15:12:33.377648 master-0 kubenswrapper[28120]: I0220 15:12:33.377599 28120 generic.go:334] "Generic (PLEG): container finished" podID="4b2c73aa-cb2b-44a5-9d6c-a790ab280c22" containerID="0c5eae79432c96dae93ac6726c8639002ebf7d199592ee1db888092814e1ce70" exitCode=0
Feb 20 15:12:33.377797 master-0 kubenswrapper[28120]: I0220 15:12:33.377677 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wxlt" event={"ID":"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22","Type":"ContainerDied","Data":"0c5eae79432c96dae93ac6726c8639002ebf7d199592ee1db888092814e1ce70"}
Feb 20 15:12:33.380026 master-0 kubenswrapper[28120]: I0220 15:12:33.379987 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr" event={"ID":"eda60af7-e9c6-4c59-a1fe-a0b2cb0b7854","Type":"ContainerStarted","Data":"88ff8c366a79732091892edb4c370b2e3133a05af00b7a1c3e0f03e821dff246"}
Feb 20 15:12:33.382657 master-0 kubenswrapper[28120]: I0220 15:12:33.382614 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29" event={"ID":"f8c37333-9069-4a4b-b2b6-dd9b6a93390b","Type":"ContainerStarted","Data":"368e9f39fbaf65238e347508f6d9db77e1cb81d2f320a43f9f732bc00e3ce68d"}
Feb 20 15:12:33.385655 master-0 kubenswrapper[28120]: I0220 15:12:33.385606 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zv5c4" event={"ID":"ff53984a-4111-4c0e-a499-aa0dd2b72294","Type":"ContainerStarted","Data":"b4d1e481fbdd58411f728382c53f34c9d17ea474dfb7a311c5d42dbff513101e"}
Feb 20 15:12:33.385861 master-0 kubenswrapper[28120]: I0220 15:12:33.385832 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zv5c4" event={"ID":"ff53984a-4111-4c0e-a499-aa0dd2b72294","Type":"ContainerStarted","Data":"1f7be04cf880a3b880ecd826e3e7eb90e8144e143a309470dd18586da8f61d2b"}
Feb 20 15:12:33.409835 master-0 kubenswrapper[28120]: I0220 15:12:33.409718 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs" podStartSLOduration=2.070704848 podStartE2EDuration="7.409689512s" podCreationTimestamp="2026-02-20 15:12:26 +0000 UTC" firstStartedPulling="2026-02-20 15:12:27.514710483 +0000 UTC m=+685.775504056" lastFinishedPulling="2026-02-20 15:12:32.853695127 +0000 UTC m=+691.114488720" observedRunningTime="2026-02-20 15:12:33.402497202 +0000 UTC m=+691.663290805" watchObservedRunningTime="2026-02-20 15:12:33.409689512 +0000 UTC m=+691.670483105"
Feb 20 15:12:33.422832 master-0 kubenswrapper[28120]: I0220 15:12:33.422540 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-pjgrq" podStartSLOduration=1.10315451 podStartE2EDuration="7.422522092s" podCreationTimestamp="2026-02-20 15:12:26 +0000 UTC" firstStartedPulling="2026-02-20 15:12:26.534620692 +0000 UTC m=+684.795414255" lastFinishedPulling="2026-02-20 15:12:32.853988244 +0000 UTC m=+691.114781837" observedRunningTime="2026-02-20 15:12:33.422278275 +0000 UTC m=+691.683071868" watchObservedRunningTime="2026-02-20 15:12:33.422522092 +0000 UTC m=+691.683315645"
Feb 20 15:12:33.455724 master-0 kubenswrapper[28120]: I0220 15:12:33.455644 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29" podStartSLOduration=2.607786318 podStartE2EDuration="10.455622347s" podCreationTimestamp="2026-02-20 15:12:23 +0000 UTC" firstStartedPulling="2026-02-20 15:12:25.073047464 +0000 UTC m=+683.333841027" lastFinishedPulling="2026-02-20 15:12:32.920883463 +0000 UTC m=+691.181677056" observedRunningTime="2026-02-20 15:12:33.448779756 +0000 UTC m=+691.709573319" watchObservedRunningTime="2026-02-20 15:12:33.455622347 +0000 UTC m=+691.716415920"
Feb 20 15:12:33.514228 master-0 kubenswrapper[28120]: I0220 15:12:33.491531 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zv5c4" podStartSLOduration=1.629949906 podStartE2EDuration="7.491509012s" podCreationTimestamp="2026-02-20 15:12:26 +0000 UTC" firstStartedPulling="2026-02-20 15:12:26.994520781 +0000 UTC m=+685.255314344" lastFinishedPulling="2026-02-20 15:12:32.856079877 +0000 UTC m=+691.116873450" observedRunningTime="2026-02-20 15:12:33.475243366 +0000 UTC m=+691.736036949" watchObservedRunningTime="2026-02-20 15:12:33.491509012 +0000 UTC m=+691.752302585"
Feb 20 15:12:33.547591 master-0 kubenswrapper[28120]: I0220 15:12:33.547433 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-28pwr" podStartSLOduration=1.784695885 podStartE2EDuration="7.547410836s" podCreationTimestamp="2026-02-20 15:12:26 +0000 UTC" firstStartedPulling="2026-02-20 15:12:27.083003047 +0000 UTC m=+685.343796610" lastFinishedPulling="2026-02-20 15:12:32.845717968 +0000 UTC m=+691.106511561" observedRunningTime="2026-02-20 15:12:33.527082529 +0000 UTC m=+691.787876102" watchObservedRunningTime="2026-02-20 15:12:33.547410836 +0000 UTC m=+691.808204399"
Feb 20 15:12:34.401798 master-0 kubenswrapper[28120]: I0220 15:12:34.401707 28120 generic.go:334] "Generic (PLEG): container finished" podID="4b2c73aa-cb2b-44a5-9d6c-a790ab280c22" containerID="f5de25e2e5678ef302887b123bd14390efc9fc288d844058f840c0ac04342d0a" exitCode=0
Feb 20 15:12:34.401798 master-0 kubenswrapper[28120]: I0220 15:12:34.401770 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wxlt" event={"ID":"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22","Type":"ContainerDied","Data":"f5de25e2e5678ef302887b123bd14390efc9fc288d844058f840c0ac04342d0a"}
Feb 20 15:12:34.403292 master-0 kubenswrapper[28120]: I0220 15:12:34.402850 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29"
Feb 20 15:12:35.186167 master-0 kubenswrapper[28120]: I0220 15:12:35.185553 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-qp6jx"
Feb 20 15:12:35.417825 master-0 kubenswrapper[28120]: I0220 15:12:35.417744 28120 generic.go:334] "Generic (PLEG): container finished" podID="4b2c73aa-cb2b-44a5-9d6c-a790ab280c22" containerID="b155508941270b053f2a369b0143f801663ea1b147ebfb3a14c2983c1fe47be3" exitCode=0
Feb 20 15:12:35.418647 master-0 kubenswrapper[28120]: I0220 15:12:35.417832 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wxlt" event={"ID":"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22","Type":"ContainerDied","Data":"b155508941270b053f2a369b0143f801663ea1b147ebfb3a14c2983c1fe47be3"}
Feb 20 15:12:36.068410 master-0 kubenswrapper[28120]: I0220 15:12:36.068320 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-ddtx2"
Feb 20 15:12:36.433668 master-0 kubenswrapper[28120]: I0220 15:12:36.433620 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wxlt" event={"ID":"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22","Type":"ContainerStarted","Data":"b784c9c90b2a766807bf4dfef36ff35a35548bb7a230583e4e600e0bc48b5d41"}
Feb 20 15:12:36.434245 master-0 kubenswrapper[28120]: I0220 15:12:36.433678 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wxlt" event={"ID":"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22","Type":"ContainerStarted","Data":"ab87184c17c25baed46975d0609f7b544843f715b2c3fd00cbe28813d7a76ee6"}
Feb 20 15:12:36.434245 master-0 kubenswrapper[28120]: I0220 15:12:36.433691 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wxlt" event={"ID":"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22","Type":"ContainerStarted","Data":"3e324cc2c910c1957ef204d9355b6faf0f57f0f359522012a8482cc81319e674"}
Feb 20 15:12:36.434245 master-0 kubenswrapper[28120]: I0220 15:12:36.433702 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wxlt" event={"ID":"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22","Type":"ContainerStarted","Data":"112ffa7d5087cf77edda0b5061b63a9c9b4f034be8805cccc078fa0f909f9a32"}
Feb 20 15:12:36.842041 master-0 kubenswrapper[28120]: I0220 15:12:36.841987 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:36.842041 master-0 kubenswrapper[28120]: I0220 15:12:36.842045 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:36.850392 master-0 kubenswrapper[28120]: I0220 15:12:36.850328 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:37.452620 master-0 kubenswrapper[28120]: I0220 15:12:37.452449 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wxlt" event={"ID":"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22","Type":"ContainerStarted","Data":"3a1c3c44e5cf410b66d0beb8ffdf7c3aba06f0341c2230d0ad4c9307703dce48"}
Feb 20 15:12:37.452620 master-0 kubenswrapper[28120]: I0220 15:12:37.452556 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4wxlt" event={"ID":"4b2c73aa-cb2b-44a5-9d6c-a790ab280c22","Type":"ContainerStarted","Data":"8a95fa891f35c0e9cfa1d017be75d563c4f3509d5cf7511e2e9bed984c99911d"}
Feb 20 15:12:37.459318 master-0 kubenswrapper[28120]: I0220 15:12:37.458721 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-56c6d95778-qpzbz"
Feb 20 15:12:37.517205 master-0 kubenswrapper[28120]: I0220 15:12:37.516882 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4wxlt" podStartSLOduration=6.348827662 podStartE2EDuration="14.516861776s" podCreationTimestamp="2026-02-20 15:12:23 +0000 UTC" firstStartedPulling="2026-02-20 15:12:24.686195026 +0000 UTC m=+682.946988629" lastFinishedPulling="2026-02-20 15:12:32.85422917 +0000 UTC m=+691.115022743" observedRunningTime="2026-02-20 15:12:37.516478887 +0000 UTC m=+695.777272500" watchObservedRunningTime="2026-02-20 15:12:37.516861776 +0000 UTC m=+695.777655349"
Feb 20 15:12:37.614323 master-0 kubenswrapper[28120]: I0220 15:12:37.614231 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6fbcbfc7fb-bq4tx"]
Feb 20 15:12:38.462493 master-0 kubenswrapper[28120]: I0220 15:12:38.462405 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4wxlt"
Feb 20 15:12:39.500853 master-0 kubenswrapper[28120]: I0220 15:12:39.500765 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4wxlt"
Feb 20 15:12:39.583958 master-0 kubenswrapper[28120]: I0220 15:12:39.583820 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4wxlt"
Feb 20 15:12:41.530016 master-0 kubenswrapper[28120]: I0220 15:12:41.529897 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-pjgrq"
Feb 20 15:12:44.641864 master-0 kubenswrapper[28120]: I0220 15:12:44.641745 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-pgm29"
Feb 20 15:12:47.072720 master-0 kubenswrapper[28120]: I0220 15:12:47.072635 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-gk8gs"
Feb 20 15:12:51.905001 master-0 kubenswrapper[28120]: I0220 15:12:51.904944 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-6mdgd"]
Feb 20 15:12:51.906149 master-0 kubenswrapper[28120]: I0220 15:12:51.905868 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:51.907763 master-0 kubenswrapper[28120]: I0220 15:12:51.907741 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Feb 20 15:12:51.931115 master-0 kubenswrapper[28120]: I0220 15:12:51.931066 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-6mdgd"]
Feb 20 15:12:52.004557 master-0 kubenswrapper[28120]: I0220 15:12:52.004483 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-device-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.004557 master-0 kubenswrapper[28120]: I0220 15:12:52.004534 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-csi-plugin-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.004557 master-0 kubenswrapper[28120]: I0220 15:12:52.004552 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fpm\" (UniqueName: \"kubernetes.io/projected/725cf3d9-4d69-4563-b231-5b5b443d16d1-kube-api-access-t7fpm\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.004870 master-0 kubenswrapper[28120]: I0220 15:12:52.004581 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-node-plugin-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.004870 master-0 kubenswrapper[28120]: I0220 15:12:52.004643 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-lvmd-config\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.004870 master-0 kubenswrapper[28120]: I0220 15:12:52.004687 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-run-udev\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.004870 master-0 kubenswrapper[28120]: I0220 15:12:52.004715 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-sys\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.004870 master-0 kubenswrapper[28120]: I0220 15:12:52.004743 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/725cf3d9-4d69-4563-b231-5b5b443d16d1-metrics-cert\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.004870 master-0 kubenswrapper[28120]: I0220 15:12:52.004760 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-file-lock-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.004870 master-0 kubenswrapper[28120]: I0220 15:12:52.004842 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-pod-volumes-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.005186 master-0 kubenswrapper[28120]: I0220 15:12:52.004907 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-registration-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.108951 master-0 kubenswrapper[28120]: I0220 15:12:52.108860 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-lvmd-config\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.109228 master-0 kubenswrapper[28120]: I0220 15:12:52.108991 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-run-udev\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.109228 master-0 kubenswrapper[28120]: I0220 15:12:52.109139 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-sys\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.109352 master-0 kubenswrapper[28120]: I0220 15:12:52.109241 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-run-udev\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.109413 master-0 kubenswrapper[28120]: I0220 15:12:52.109359 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-lvmd-config\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.109413 master-0 kubenswrapper[28120]: I0220 15:12:52.109376 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/725cf3d9-4d69-4563-b231-5b5b443d16d1-metrics-cert\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.109537 master-0 kubenswrapper[28120]: I0220 15:12:52.109392 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-sys\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd"
Feb 20 15:12:52.109640 master-0 kubenswrapper[28120]: I0220 15:12:52.109516 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: 
\"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-file-lock-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.109710 master-0 kubenswrapper[28120]: I0220 15:12:52.109663 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-pod-volumes-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.109771 master-0 kubenswrapper[28120]: I0220 15:12:52.109714 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-registration-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.109771 master-0 kubenswrapper[28120]: I0220 15:12:52.109743 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-pod-volumes-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.110093 master-0 kubenswrapper[28120]: I0220 15:12:52.109789 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-registration-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.110350 master-0 kubenswrapper[28120]: I0220 15:12:52.109839 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: 
\"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-device-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.110446 master-0 kubenswrapper[28120]: I0220 15:12:52.110414 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7fpm\" (UniqueName: \"kubernetes.io/projected/725cf3d9-4d69-4563-b231-5b5b443d16d1-kube-api-access-t7fpm\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.110519 master-0 kubenswrapper[28120]: I0220 15:12:52.110473 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-csi-plugin-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.110579 master-0 kubenswrapper[28120]: I0220 15:12:52.110530 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-node-plugin-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.110579 master-0 kubenswrapper[28120]: I0220 15:12:52.109866 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-device-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.110866 master-0 kubenswrapper[28120]: I0220 15:12:52.110820 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: 
\"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-node-plugin-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.111026 master-0 kubenswrapper[28120]: I0220 15:12:52.109858 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-file-lock-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.111112 master-0 kubenswrapper[28120]: I0220 15:12:52.111042 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/725cf3d9-4d69-4563-b231-5b5b443d16d1-csi-plugin-dir\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.113509 master-0 kubenswrapper[28120]: I0220 15:12:52.113457 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/725cf3d9-4d69-4563-b231-5b5b443d16d1-metrics-cert\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.133150 master-0 kubenswrapper[28120]: I0220 15:12:52.133084 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7fpm\" (UniqueName: \"kubernetes.io/projected/725cf3d9-4d69-4563-b231-5b5b443d16d1-kube-api-access-t7fpm\") pod \"vg-manager-6mdgd\" (UID: \"725cf3d9-4d69-4563-b231-5b5b443d16d1\") " pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.223256 master-0 kubenswrapper[28120]: I0220 15:12:52.223084 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:12:52.747413 master-0 kubenswrapper[28120]: W0220 15:12:52.747326 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod725cf3d9_4d69_4563_b231_5b5b443d16d1.slice/crio-bb542c8e0efa9f6cd5847976204b9957446d8001617e1402e50beb8494a86865 WatchSource:0}: Error finding container bb542c8e0efa9f6cd5847976204b9957446d8001617e1402e50beb8494a86865: Status 404 returned error can't find the container with id bb542c8e0efa9f6cd5847976204b9957446d8001617e1402e50beb8494a86865 Feb 20 15:12:52.749585 master-0 kubenswrapper[28120]: I0220 15:12:52.749491 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-6mdgd"] Feb 20 15:12:53.637344 master-0 kubenswrapper[28120]: I0220 15:12:53.637282 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-6mdgd" event={"ID":"725cf3d9-4d69-4563-b231-5b5b443d16d1","Type":"ContainerStarted","Data":"ba6207539abb926083fb0da062b4f1b88d5cb179e7e372e2505412d273752e3d"} Feb 20 15:12:53.637344 master-0 kubenswrapper[28120]: I0220 15:12:53.637338 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-6mdgd" event={"ID":"725cf3d9-4d69-4563-b231-5b5b443d16d1","Type":"ContainerStarted","Data":"bb542c8e0efa9f6cd5847976204b9957446d8001617e1402e50beb8494a86865"} Feb 20 15:12:53.677259 master-0 kubenswrapper[28120]: I0220 15:12:53.677133 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-6mdgd" podStartSLOduration=2.6770847829999997 podStartE2EDuration="2.677084783s" podCreationTimestamp="2026-02-20 15:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:12:53.665655648 +0000 UTC m=+711.926449221" watchObservedRunningTime="2026-02-20 15:12:53.677084783 
+0000 UTC m=+711.937878366" Feb 20 15:12:54.509588 master-0 kubenswrapper[28120]: I0220 15:12:54.509525 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4wxlt" Feb 20 15:12:55.680885 master-0 kubenswrapper[28120]: I0220 15:12:55.680771 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-6mdgd_725cf3d9-4d69-4563-b231-5b5b443d16d1/vg-manager/0.log" Feb 20 15:12:55.680885 master-0 kubenswrapper[28120]: I0220 15:12:55.680866 28120 generic.go:334] "Generic (PLEG): container finished" podID="725cf3d9-4d69-4563-b231-5b5b443d16d1" containerID="ba6207539abb926083fb0da062b4f1b88d5cb179e7e372e2505412d273752e3d" exitCode=1 Feb 20 15:12:55.681764 master-0 kubenswrapper[28120]: I0220 15:12:55.680929 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-6mdgd" event={"ID":"725cf3d9-4d69-4563-b231-5b5b443d16d1","Type":"ContainerDied","Data":"ba6207539abb926083fb0da062b4f1b88d5cb179e7e372e2505412d273752e3d"} Feb 20 15:12:55.681894 master-0 kubenswrapper[28120]: I0220 15:12:55.681847 28120 scope.go:117] "RemoveContainer" containerID="ba6207539abb926083fb0da062b4f1b88d5cb179e7e372e2505412d273752e3d" Feb 20 15:12:56.162404 master-0 kubenswrapper[28120]: I0220 15:12:56.162310 28120 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Feb 20 15:12:56.327010 master-0 kubenswrapper[28120]: I0220 15:12:56.326840 28120 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-02-20T15:12:56.1623516Z","Handler":null,"Name":""} Feb 20 15:12:56.331285 master-0 kubenswrapper[28120]: I0220 15:12:56.331223 28120 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock 
versions: 1.0.0 Feb 20 15:12:56.331436 master-0 kubenswrapper[28120]: I0220 15:12:56.331311 28120 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Feb 20 15:12:56.691818 master-0 kubenswrapper[28120]: I0220 15:12:56.691679 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-6mdgd_725cf3d9-4d69-4563-b231-5b5b443d16d1/vg-manager/0.log" Feb 20 15:12:56.692402 master-0 kubenswrapper[28120]: I0220 15:12:56.691835 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-6mdgd" event={"ID":"725cf3d9-4d69-4563-b231-5b5b443d16d1","Type":"ContainerStarted","Data":"56a5ba6a847f87c2bed050c7869dfd6996448871a8b23b8c259cec6edcc9086d"} Feb 20 15:12:59.092874 master-0 kubenswrapper[28120]: I0220 15:12:59.092802 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-msbzv"] Feb 20 15:12:59.094156 master-0 kubenswrapper[28120]: I0220 15:12:59.094118 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-msbzv" Feb 20 15:12:59.103266 master-0 kubenswrapper[28120]: I0220 15:12:59.103220 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 20 15:12:59.103414 master-0 kubenswrapper[28120]: I0220 15:12:59.103301 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 20 15:12:59.114802 master-0 kubenswrapper[28120]: I0220 15:12:59.114737 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-msbzv"] Feb 20 15:12:59.192089 master-0 kubenswrapper[28120]: I0220 15:12:59.192018 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxhvm\" (UniqueName: \"kubernetes.io/projected/e8748298-b06b-4cca-86c1-422b40514546-kube-api-access-fxhvm\") pod \"openstack-operator-index-msbzv\" (UID: \"e8748298-b06b-4cca-86c1-422b40514546\") " pod="openstack-operators/openstack-operator-index-msbzv" Feb 20 15:12:59.294463 master-0 kubenswrapper[28120]: I0220 15:12:59.294398 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxhvm\" (UniqueName: \"kubernetes.io/projected/e8748298-b06b-4cca-86c1-422b40514546-kube-api-access-fxhvm\") pod \"openstack-operator-index-msbzv\" (UID: \"e8748298-b06b-4cca-86c1-422b40514546\") " pod="openstack-operators/openstack-operator-index-msbzv" Feb 20 15:12:59.317536 master-0 kubenswrapper[28120]: I0220 15:12:59.317470 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxhvm\" (UniqueName: \"kubernetes.io/projected/e8748298-b06b-4cca-86c1-422b40514546-kube-api-access-fxhvm\") pod \"openstack-operator-index-msbzv\" (UID: \"e8748298-b06b-4cca-86c1-422b40514546\") " pod="openstack-operators/openstack-operator-index-msbzv" Feb 20 15:12:59.430438 master-0 
kubenswrapper[28120]: I0220 15:12:59.430310 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-msbzv" Feb 20 15:12:59.880847 master-0 kubenswrapper[28120]: I0220 15:12:59.880784 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-msbzv"] Feb 20 15:12:59.897000 master-0 kubenswrapper[28120]: W0220 15:12:59.896906 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8748298_b06b_4cca_86c1_422b40514546.slice/crio-f4d643cc1cfbfcce84d5caae30bf16cfe16610416bfd1ee007da106713820a74 WatchSource:0}: Error finding container f4d643cc1cfbfcce84d5caae30bf16cfe16610416bfd1ee007da106713820a74: Status 404 returned error can't find the container with id f4d643cc1cfbfcce84d5caae30bf16cfe16610416bfd1ee007da106713820a74 Feb 20 15:13:00.743532 master-0 kubenswrapper[28120]: I0220 15:13:00.743460 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-msbzv" event={"ID":"e8748298-b06b-4cca-86c1-422b40514546","Type":"ContainerStarted","Data":"f4d643cc1cfbfcce84d5caae30bf16cfe16610416bfd1ee007da106713820a74"} Feb 20 15:13:01.754900 master-0 kubenswrapper[28120]: I0220 15:13:01.754824 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-msbzv" event={"ID":"e8748298-b06b-4cca-86c1-422b40514546","Type":"ContainerStarted","Data":"e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b"} Feb 20 15:13:01.782591 master-0 kubenswrapper[28120]: I0220 15:13:01.781288 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-msbzv" podStartSLOduration=1.882696634 podStartE2EDuration="2.781264633s" podCreationTimestamp="2026-02-20 15:12:59 +0000 UTC" firstStartedPulling="2026-02-20 15:12:59.898564673 +0000 UTC 
m=+718.159358246" lastFinishedPulling="2026-02-20 15:13:00.797132652 +0000 UTC m=+719.057926245" observedRunningTime="2026-02-20 15:13:01.773742856 +0000 UTC m=+720.034536439" watchObservedRunningTime="2026-02-20 15:13:01.781264633 +0000 UTC m=+720.042058196" Feb 20 15:13:02.224874 master-0 kubenswrapper[28120]: I0220 15:13:02.224713 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:13:02.227823 master-0 kubenswrapper[28120]: I0220 15:13:02.227766 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:13:02.687767 master-0 kubenswrapper[28120]: I0220 15:13:02.687641 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6fbcbfc7fb-bq4tx" podUID="744bf91d-59af-4aab-bb6b-71bde572550f" containerName="console" containerID="cri-o://19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560" gracePeriod=15 Feb 20 15:13:02.768119 master-0 kubenswrapper[28120]: I0220 15:13:02.767680 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:13:02.769111 master-0 kubenswrapper[28120]: I0220 15:13:02.768462 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-6mdgd" Feb 20 15:13:03.214689 master-0 kubenswrapper[28120]: I0220 15:13:03.214025 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6fbcbfc7fb-bq4tx_744bf91d-59af-4aab-bb6b-71bde572550f/console/0.log" Feb 20 15:13:03.214689 master-0 kubenswrapper[28120]: I0220 15:13:03.214117 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6fbcbfc7fb-bq4tx" Feb 20 15:13:03.238667 master-0 kubenswrapper[28120]: I0220 15:13:03.238559 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-msbzv"] Feb 20 15:13:03.298957 master-0 kubenswrapper[28120]: I0220 15:13:03.297182 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-service-ca\") pod \"744bf91d-59af-4aab-bb6b-71bde572550f\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " Feb 20 15:13:03.298957 master-0 kubenswrapper[28120]: I0220 15:13:03.297268 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-trusted-ca-bundle\") pod \"744bf91d-59af-4aab-bb6b-71bde572550f\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " Feb 20 15:13:03.298957 master-0 kubenswrapper[28120]: I0220 15:13:03.297469 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/744bf91d-59af-4aab-bb6b-71bde572550f-console-serving-cert\") pod \"744bf91d-59af-4aab-bb6b-71bde572550f\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " Feb 20 15:13:03.298957 master-0 kubenswrapper[28120]: I0220 15:13:03.297508 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-oauth-serving-cert\") pod \"744bf91d-59af-4aab-bb6b-71bde572550f\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " Feb 20 15:13:03.298957 master-0 kubenswrapper[28120]: I0220 15:13:03.297570 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-console-config\") pod \"744bf91d-59af-4aab-bb6b-71bde572550f\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " Feb 20 15:13:03.298957 master-0 kubenswrapper[28120]: I0220 15:13:03.297604 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbl5w\" (UniqueName: \"kubernetes.io/projected/744bf91d-59af-4aab-bb6b-71bde572550f-kube-api-access-lbl5w\") pod \"744bf91d-59af-4aab-bb6b-71bde572550f\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " Feb 20 15:13:03.298957 master-0 kubenswrapper[28120]: I0220 15:13:03.297682 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/744bf91d-59af-4aab-bb6b-71bde572550f-console-oauth-config\") pod \"744bf91d-59af-4aab-bb6b-71bde572550f\" (UID: \"744bf91d-59af-4aab-bb6b-71bde572550f\") " Feb 20 15:13:03.300256 master-0 kubenswrapper[28120]: I0220 15:13:03.300198 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "744bf91d-59af-4aab-bb6b-71bde572550f" (UID: "744bf91d-59af-4aab-bb6b-71bde572550f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:13:03.300656 master-0 kubenswrapper[28120]: I0220 15:13:03.300579 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-service-ca" (OuterVolumeSpecName: "service-ca") pod "744bf91d-59af-4aab-bb6b-71bde572550f" (UID: "744bf91d-59af-4aab-bb6b-71bde572550f"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:13:03.300815 master-0 kubenswrapper[28120]: I0220 15:13:03.300777 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-console-config" (OuterVolumeSpecName: "console-config") pod "744bf91d-59af-4aab-bb6b-71bde572550f" (UID: "744bf91d-59af-4aab-bb6b-71bde572550f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:13:03.301246 master-0 kubenswrapper[28120]: I0220 15:13:03.301150 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "744bf91d-59af-4aab-bb6b-71bde572550f" (UID: "744bf91d-59af-4aab-bb6b-71bde572550f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:13:03.346145 master-0 kubenswrapper[28120]: I0220 15:13:03.344884 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/744bf91d-59af-4aab-bb6b-71bde572550f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "744bf91d-59af-4aab-bb6b-71bde572550f" (UID: "744bf91d-59af-4aab-bb6b-71bde572550f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:13:03.346589 master-0 kubenswrapper[28120]: I0220 15:13:03.346503 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744bf91d-59af-4aab-bb6b-71bde572550f-kube-api-access-lbl5w" (OuterVolumeSpecName: "kube-api-access-lbl5w") pod "744bf91d-59af-4aab-bb6b-71bde572550f" (UID: "744bf91d-59af-4aab-bb6b-71bde572550f"). InnerVolumeSpecName "kube-api-access-lbl5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:13:03.347126 master-0 kubenswrapper[28120]: I0220 15:13:03.347107 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/744bf91d-59af-4aab-bb6b-71bde572550f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "744bf91d-59af-4aab-bb6b-71bde572550f" (UID: "744bf91d-59af-4aab-bb6b-71bde572550f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:13:03.399543 master-0 kubenswrapper[28120]: I0220 15:13:03.399441 28120 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/744bf91d-59af-4aab-bb6b-71bde572550f-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 15:13:03.399728 master-0 kubenswrapper[28120]: I0220 15:13:03.399716 28120 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 20 15:13:03.399794 master-0 kubenswrapper[28120]: I0220 15:13:03.399783 28120 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-console-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:13:03.399864 master-0 kubenswrapper[28120]: I0220 15:13:03.399854 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbl5w\" (UniqueName: \"kubernetes.io/projected/744bf91d-59af-4aab-bb6b-71bde572550f-kube-api-access-lbl5w\") on node \"master-0\" DevicePath \"\"" Feb 20 15:13:03.399949 master-0 kubenswrapper[28120]: I0220 15:13:03.399921 28120 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/744bf91d-59af-4aab-bb6b-71bde572550f-console-oauth-config\") on node \"master-0\" DevicePath \"\"" 
Feb 20 15:13:03.400021 master-0 kubenswrapper[28120]: I0220 15:13:03.400009 28120 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 20 15:13:03.400080 master-0 kubenswrapper[28120]: I0220 15:13:03.400070 28120 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/744bf91d-59af-4aab-bb6b-71bde572550f-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:13:03.781440 master-0 kubenswrapper[28120]: I0220 15:13:03.781401 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6fbcbfc7fb-bq4tx_744bf91d-59af-4aab-bb6b-71bde572550f/console/0.log"
Feb 20 15:13:03.782027 master-0 kubenswrapper[28120]: I0220 15:13:03.782003 28120 generic.go:334] "Generic (PLEG): container finished" podID="744bf91d-59af-4aab-bb6b-71bde572550f" containerID="19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560" exitCode=2
Feb 20 15:13:03.782242 master-0 kubenswrapper[28120]: I0220 15:13:03.782163 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fbcbfc7fb-bq4tx"
Feb 20 15:13:03.782242 master-0 kubenswrapper[28120]: I0220 15:13:03.782162 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fbcbfc7fb-bq4tx" event={"ID":"744bf91d-59af-4aab-bb6b-71bde572550f","Type":"ContainerDied","Data":"19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560"}
Feb 20 15:13:03.782336 master-0 kubenswrapper[28120]: I0220 15:13:03.782254 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fbcbfc7fb-bq4tx" event={"ID":"744bf91d-59af-4aab-bb6b-71bde572550f","Type":"ContainerDied","Data":"fed62013b96ce4ab5651cc1ca0a130154335be7494045a62fb7335f6d97a1509"}
Feb 20 15:13:03.782336 master-0 kubenswrapper[28120]: I0220 15:13:03.782288 28120 scope.go:117] "RemoveContainer" containerID="19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560"
Feb 20 15:13:03.783249 master-0 kubenswrapper[28120]: I0220 15:13:03.783143 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-msbzv" podUID="e8748298-b06b-4cca-86c1-422b40514546" containerName="registry-server" containerID="cri-o://e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b" gracePeriod=2
Feb 20 15:13:03.835663 master-0 kubenswrapper[28120]: I0220 15:13:03.835596 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6fbcbfc7fb-bq4tx"]
Feb 20 15:13:03.836756 master-0 kubenswrapper[28120]: I0220 15:13:03.836722 28120 scope.go:117] "RemoveContainer" containerID="19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560"
Feb 20 15:13:03.838028 master-0 kubenswrapper[28120]: E0220 15:13:03.837972 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560\": container with ID starting with 19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560 not found: ID does not exist" containerID="19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560"
Feb 20 15:13:03.838312 master-0 kubenswrapper[28120]: I0220 15:13:03.838254 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560"} err="failed to get container status \"19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560\": rpc error: code = NotFound desc = could not find container \"19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560\": container with ID starting with 19e1bf99a56344cb4e8da65d6ed306afe6382354ad74a4856e13bf3bec3ad560 not found: ID does not exist"
Feb 20 15:13:03.845701 master-0 kubenswrapper[28120]: I0220 15:13:03.845662 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6fbcbfc7fb-bq4tx"]
Feb 20 15:13:03.860778 master-0 kubenswrapper[28120]: I0220 15:13:03.860725 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pbqsz"]
Feb 20 15:13:03.861469 master-0 kubenswrapper[28120]: E0220 15:13:03.861448 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744bf91d-59af-4aab-bb6b-71bde572550f" containerName="console"
Feb 20 15:13:03.861571 master-0 kubenswrapper[28120]: I0220 15:13:03.861557 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="744bf91d-59af-4aab-bb6b-71bde572550f" containerName="console"
Feb 20 15:13:03.861894 master-0 kubenswrapper[28120]: I0220 15:13:03.861878 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="744bf91d-59af-4aab-bb6b-71bde572550f" containerName="console"
Feb 20 15:13:03.862685 master-0 kubenswrapper[28120]: I0220 15:13:03.862664 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pbqsz"
Feb 20 15:13:03.870394 master-0 kubenswrapper[28120]: I0220 15:13:03.870304 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pbqsz"]
Feb 20 15:13:03.917775 master-0 kubenswrapper[28120]: I0220 15:13:03.917690 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz2ch\" (UniqueName: \"kubernetes.io/projected/e3a8e312-d3a3-4298-8d1f-e4ed6d0101d1-kube-api-access-jz2ch\") pod \"openstack-operator-index-pbqsz\" (UID: \"e3a8e312-d3a3-4298-8d1f-e4ed6d0101d1\") " pod="openstack-operators/openstack-operator-index-pbqsz"
Feb 20 15:13:04.019591 master-0 kubenswrapper[28120]: I0220 15:13:04.019489 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz2ch\" (UniqueName: \"kubernetes.io/projected/e3a8e312-d3a3-4298-8d1f-e4ed6d0101d1-kube-api-access-jz2ch\") pod \"openstack-operator-index-pbqsz\" (UID: \"e3a8e312-d3a3-4298-8d1f-e4ed6d0101d1\") " pod="openstack-operators/openstack-operator-index-pbqsz"
Feb 20 15:13:04.049212 master-0 kubenswrapper[28120]: I0220 15:13:04.048708 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz2ch\" (UniqueName: \"kubernetes.io/projected/e3a8e312-d3a3-4298-8d1f-e4ed6d0101d1-kube-api-access-jz2ch\") pod \"openstack-operator-index-pbqsz\" (UID: \"e3a8e312-d3a3-4298-8d1f-e4ed6d0101d1\") " pod="openstack-operators/openstack-operator-index-pbqsz"
Feb 20 15:13:04.076183 master-0 kubenswrapper[28120]: I0220 15:13:04.076072 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744bf91d-59af-4aab-bb6b-71bde572550f" path="/var/lib/kubelet/pods/744bf91d-59af-4aab-bb6b-71bde572550f/volumes"
Feb 20 15:13:04.240188 master-0 kubenswrapper[28120]: I0220 15:13:04.238641 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pbqsz"
Feb 20 15:13:04.356510 master-0 kubenswrapper[28120]: I0220 15:13:04.356326 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-msbzv"
Feb 20 15:13:04.429491 master-0 kubenswrapper[28120]: I0220 15:13:04.429420 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxhvm\" (UniqueName: \"kubernetes.io/projected/e8748298-b06b-4cca-86c1-422b40514546-kube-api-access-fxhvm\") pod \"e8748298-b06b-4cca-86c1-422b40514546\" (UID: \"e8748298-b06b-4cca-86c1-422b40514546\") "
Feb 20 15:13:04.458458 master-0 kubenswrapper[28120]: I0220 15:13:04.458365 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8748298-b06b-4cca-86c1-422b40514546-kube-api-access-fxhvm" (OuterVolumeSpecName: "kube-api-access-fxhvm") pod "e8748298-b06b-4cca-86c1-422b40514546" (UID: "e8748298-b06b-4cca-86c1-422b40514546"). InnerVolumeSpecName "kube-api-access-fxhvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:13:04.533850 master-0 kubenswrapper[28120]: I0220 15:13:04.533728 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxhvm\" (UniqueName: \"kubernetes.io/projected/e8748298-b06b-4cca-86c1-422b40514546-kube-api-access-fxhvm\") on node \"master-0\" DevicePath \"\"" Feb 20 15:13:04.800776 master-0 kubenswrapper[28120]: I0220 15:13:04.800709 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pbqsz"] Feb 20 15:13:04.801117 master-0 kubenswrapper[28120]: I0220 15:13:04.800780 28120 generic.go:334] "Generic (PLEG): container finished" podID="e8748298-b06b-4cca-86c1-422b40514546" containerID="e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b" exitCode=0 Feb 20 15:13:04.801117 master-0 kubenswrapper[28120]: I0220 15:13:04.800797 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-msbzv" event={"ID":"e8748298-b06b-4cca-86c1-422b40514546","Type":"ContainerDied","Data":"e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b"} Feb 20 15:13:04.801117 master-0 kubenswrapper[28120]: I0220 15:13:04.800886 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-msbzv" Feb 20 15:13:04.801117 master-0 kubenswrapper[28120]: I0220 15:13:04.800975 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-msbzv" event={"ID":"e8748298-b06b-4cca-86c1-422b40514546","Type":"ContainerDied","Data":"f4d643cc1cfbfcce84d5caae30bf16cfe16610416bfd1ee007da106713820a74"} Feb 20 15:13:04.801117 master-0 kubenswrapper[28120]: I0220 15:13:04.801019 28120 scope.go:117] "RemoveContainer" containerID="e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b" Feb 20 15:13:04.808647 master-0 kubenswrapper[28120]: W0220 15:13:04.808579 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3a8e312_d3a3_4298_8d1f_e4ed6d0101d1.slice/crio-f0be61626fe70cce5cf3424d6ab6b58599a6aeaef1f305268abc9f6bef5a2faf WatchSource:0}: Error finding container f0be61626fe70cce5cf3424d6ab6b58599a6aeaef1f305268abc9f6bef5a2faf: Status 404 returned error can't find the container with id f0be61626fe70cce5cf3424d6ab6b58599a6aeaef1f305268abc9f6bef5a2faf Feb 20 15:13:04.827346 master-0 kubenswrapper[28120]: I0220 15:13:04.827300 28120 scope.go:117] "RemoveContainer" containerID="e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b" Feb 20 15:13:04.828763 master-0 kubenswrapper[28120]: E0220 15:13:04.828509 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b\": container with ID starting with e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b not found: ID does not exist" containerID="e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b" Feb 20 15:13:04.828763 master-0 kubenswrapper[28120]: I0220 15:13:04.828561 28120 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b"} err="failed to get container status \"e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b\": rpc error: code = NotFound desc = could not find container \"e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b\": container with ID starting with e1cef95ba86605d3b6a615b8d5df840cc80723a14d8b5a254ff9f7a507e7f15b not found: ID does not exist" Feb 20 15:13:04.858832 master-0 kubenswrapper[28120]: I0220 15:13:04.858730 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-msbzv"] Feb 20 15:13:04.872554 master-0 kubenswrapper[28120]: I0220 15:13:04.872464 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-msbzv"] Feb 20 15:13:05.830911 master-0 kubenswrapper[28120]: I0220 15:13:05.830834 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pbqsz" event={"ID":"e3a8e312-d3a3-4298-8d1f-e4ed6d0101d1","Type":"ContainerStarted","Data":"b32a6d884093de8d0b6dfc1e799af0cbe02f6830e59f0c37911c85a211ca86ed"} Feb 20 15:13:05.832458 master-0 kubenswrapper[28120]: I0220 15:13:05.831011 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pbqsz" event={"ID":"e3a8e312-d3a3-4298-8d1f-e4ed6d0101d1","Type":"ContainerStarted","Data":"f0be61626fe70cce5cf3424d6ab6b58599a6aeaef1f305268abc9f6bef5a2faf"} Feb 20 15:13:05.868006 master-0 kubenswrapper[28120]: I0220 15:13:05.867832 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pbqsz" podStartSLOduration=2.467500031 podStartE2EDuration="2.867797703s" podCreationTimestamp="2026-02-20 15:13:03 +0000 UTC" firstStartedPulling="2026-02-20 15:13:04.827985043 +0000 UTC m=+723.088778646" lastFinishedPulling="2026-02-20 15:13:05.228282725 +0000 UTC 
m=+723.489076318" observedRunningTime="2026-02-20 15:13:05.860959383 +0000 UTC m=+724.121752996" watchObservedRunningTime="2026-02-20 15:13:05.867797703 +0000 UTC m=+724.128591306" Feb 20 15:13:06.076478 master-0 kubenswrapper[28120]: I0220 15:13:06.076256 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8748298-b06b-4cca-86c1-422b40514546" path="/var/lib/kubelet/pods/e8748298-b06b-4cca-86c1-422b40514546/volumes" Feb 20 15:13:07.731614 master-0 kubenswrapper[28120]: I0220 15:13:07.731504 28120 scope.go:117] "RemoveContainer" containerID="b565aadbad872ccb9f55e50e2a8984e6d0b605d8356a58bc1a9a25819417ad92" Feb 20 15:13:14.239877 master-0 kubenswrapper[28120]: I0220 15:13:14.239757 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-pbqsz" Feb 20 15:13:14.239877 master-0 kubenswrapper[28120]: I0220 15:13:14.239883 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-pbqsz" Feb 20 15:13:14.290545 master-0 kubenswrapper[28120]: I0220 15:13:14.290418 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-pbqsz" Feb 20 15:13:14.962992 master-0 kubenswrapper[28120]: I0220 15:13:14.962662 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-pbqsz" Feb 20 15:13:22.533241 master-0 kubenswrapper[28120]: I0220 15:13:22.533144 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb"] Feb 20 15:13:22.535441 master-0 kubenswrapper[28120]: E0220 15:13:22.533914 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8748298-b06b-4cca-86c1-422b40514546" containerName="registry-server" Feb 20 15:13:22.535441 master-0 kubenswrapper[28120]: I0220 15:13:22.533982 28120 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e8748298-b06b-4cca-86c1-422b40514546" containerName="registry-server" Feb 20 15:13:22.535441 master-0 kubenswrapper[28120]: I0220 15:13:22.534461 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8748298-b06b-4cca-86c1-422b40514546" containerName="registry-server" Feb 20 15:13:22.537085 master-0 kubenswrapper[28120]: I0220 15:13:22.537041 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:22.548574 master-0 kubenswrapper[28120]: I0220 15:13:22.548490 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb"] Feb 20 15:13:22.584812 master-0 kubenswrapper[28120]: I0220 15:13:22.584713 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvq88\" (UniqueName: \"kubernetes.io/projected/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-kube-api-access-qvq88\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb\" (UID: \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:22.584812 master-0 kubenswrapper[28120]: I0220 15:13:22.584774 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb\" (UID: \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:22.584996 master-0 kubenswrapper[28120]: I0220 15:13:22.584852 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb\" (UID: \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:22.686360 master-0 kubenswrapper[28120]: I0220 15:13:22.686303 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvq88\" (UniqueName: \"kubernetes.io/projected/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-kube-api-access-qvq88\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb\" (UID: \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:22.686360 master-0 kubenswrapper[28120]: I0220 15:13:22.686358 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb\" (UID: \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:22.686602 master-0 kubenswrapper[28120]: I0220 15:13:22.686413 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb\" (UID: \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:22.686893 master-0 kubenswrapper[28120]: I0220 15:13:22.686864 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-bundle\") pod 
\"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb\" (UID: \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:22.687294 master-0 kubenswrapper[28120]: I0220 15:13:22.687218 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb\" (UID: \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:22.709030 master-0 kubenswrapper[28120]: I0220 15:13:22.708969 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvq88\" (UniqueName: \"kubernetes.io/projected/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-kube-api-access-qvq88\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb\" (UID: \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:22.889126 master-0 kubenswrapper[28120]: I0220 15:13:22.888982 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:23.397136 master-0 kubenswrapper[28120]: I0220 15:13:23.397050 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb"] Feb 20 15:13:23.399047 master-0 kubenswrapper[28120]: W0220 15:13:23.399002 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f082a6_59a7_4bf2_ae8d_efaf494c4efd.slice/crio-58132f8ed17b8e008ba3310b908d57a6538b2ca31118f1a691e46a4032f217b7 WatchSource:0}: Error finding container 58132f8ed17b8e008ba3310b908d57a6538b2ca31118f1a691e46a4032f217b7: Status 404 returned error can't find the container with id 58132f8ed17b8e008ba3310b908d57a6538b2ca31118f1a691e46a4032f217b7 Feb 20 15:13:24.042276 master-0 kubenswrapper[28120]: I0220 15:13:24.042198 28120 generic.go:334] "Generic (PLEG): container finished" podID="62f082a6-59a7-4bf2-ae8d-efaf494c4efd" containerID="0a37c83171d78cc5a4b5ecac259f7efc0a9ec310e18856c49a5079a36740a9f2" exitCode=0 Feb 20 15:13:24.043322 master-0 kubenswrapper[28120]: I0220 15:13:24.043243 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" event={"ID":"62f082a6-59a7-4bf2-ae8d-efaf494c4efd","Type":"ContainerDied","Data":"0a37c83171d78cc5a4b5ecac259f7efc0a9ec310e18856c49a5079a36740a9f2"} Feb 20 15:13:24.043584 master-0 kubenswrapper[28120]: I0220 15:13:24.043541 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" event={"ID":"62f082a6-59a7-4bf2-ae8d-efaf494c4efd","Type":"ContainerStarted","Data":"58132f8ed17b8e008ba3310b908d57a6538b2ca31118f1a691e46a4032f217b7"} Feb 20 15:13:25.063172 master-0 kubenswrapper[28120]: I0220 15:13:25.059153 28120 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" event={"ID":"62f082a6-59a7-4bf2-ae8d-efaf494c4efd","Type":"ContainerStarted","Data":"57e7d60eb199129819d680d9e824f7d8e29eb8ad548ddebfad42acf2b26e7242"} Feb 20 15:13:26.085372 master-0 kubenswrapper[28120]: I0220 15:13:26.085146 28120 generic.go:334] "Generic (PLEG): container finished" podID="62f082a6-59a7-4bf2-ae8d-efaf494c4efd" containerID="57e7d60eb199129819d680d9e824f7d8e29eb8ad548ddebfad42acf2b26e7242" exitCode=0 Feb 20 15:13:26.085372 master-0 kubenswrapper[28120]: I0220 15:13:26.085255 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" event={"ID":"62f082a6-59a7-4bf2-ae8d-efaf494c4efd","Type":"ContainerDied","Data":"57e7d60eb199129819d680d9e824f7d8e29eb8ad548ddebfad42acf2b26e7242"} Feb 20 15:13:27.100714 master-0 kubenswrapper[28120]: I0220 15:13:27.100611 28120 generic.go:334] "Generic (PLEG): container finished" podID="62f082a6-59a7-4bf2-ae8d-efaf494c4efd" containerID="fb95a3dc8f3b1954f9dd8c274ace6f6ac3aa59e4e3d2999f0d4847b4eda94fa1" exitCode=0 Feb 20 15:13:27.100714 master-0 kubenswrapper[28120]: I0220 15:13:27.100703 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" event={"ID":"62f082a6-59a7-4bf2-ae8d-efaf494c4efd","Type":"ContainerDied","Data":"fb95a3dc8f3b1954f9dd8c274ace6f6ac3aa59e4e3d2999f0d4847b4eda94fa1"} Feb 20 15:13:28.577532 master-0 kubenswrapper[28120]: I0220 15:13:28.577453 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:28.604903 master-0 kubenswrapper[28120]: I0220 15:13:28.604684 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-util\") pod \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\" (UID: \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\") " Feb 20 15:13:28.604903 master-0 kubenswrapper[28120]: I0220 15:13:28.604828 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvq88\" (UniqueName: \"kubernetes.io/projected/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-kube-api-access-qvq88\") pod \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\" (UID: \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\") " Feb 20 15:13:28.604903 master-0 kubenswrapper[28120]: I0220 15:13:28.604904 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-bundle\") pod \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\" (UID: \"62f082a6-59a7-4bf2-ae8d-efaf494c4efd\") " Feb 20 15:13:28.606629 master-0 kubenswrapper[28120]: I0220 15:13:28.606544 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-bundle" (OuterVolumeSpecName: "bundle") pod "62f082a6-59a7-4bf2-ae8d-efaf494c4efd" (UID: "62f082a6-59a7-4bf2-ae8d-efaf494c4efd"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:13:28.610643 master-0 kubenswrapper[28120]: I0220 15:13:28.610565 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-kube-api-access-qvq88" (OuterVolumeSpecName: "kube-api-access-qvq88") pod "62f082a6-59a7-4bf2-ae8d-efaf494c4efd" (UID: "62f082a6-59a7-4bf2-ae8d-efaf494c4efd"). InnerVolumeSpecName "kube-api-access-qvq88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:13:28.635743 master-0 kubenswrapper[28120]: I0220 15:13:28.635666 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-util" (OuterVolumeSpecName: "util") pod "62f082a6-59a7-4bf2-ae8d-efaf494c4efd" (UID: "62f082a6-59a7-4bf2-ae8d-efaf494c4efd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:13:28.708199 master-0 kubenswrapper[28120]: I0220 15:13:28.708150 28120 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-util\") on node \"master-0\" DevicePath \"\"" Feb 20 15:13:28.708199 master-0 kubenswrapper[28120]: I0220 15:13:28.708194 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvq88\" (UniqueName: \"kubernetes.io/projected/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-kube-api-access-qvq88\") on node \"master-0\" DevicePath \"\"" Feb 20 15:13:28.708199 master-0 kubenswrapper[28120]: I0220 15:13:28.708210 28120 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/62f082a6-59a7-4bf2-ae8d-efaf494c4efd-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:13:29.125723 master-0 kubenswrapper[28120]: I0220 15:13:29.125584 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" event={"ID":"62f082a6-59a7-4bf2-ae8d-efaf494c4efd","Type":"ContainerDied","Data":"58132f8ed17b8e008ba3310b908d57a6538b2ca31118f1a691e46a4032f217b7"} Feb 20 15:13:29.125723 master-0 kubenswrapper[28120]: I0220 15:13:29.125700 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58132f8ed17b8e008ba3310b908d57a6538b2ca31118f1a691e46a4032f217b7" Feb 20 15:13:29.125723 master-0 kubenswrapper[28120]: I0220 15:13:29.125637 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967sqjpb" Feb 20 15:13:35.120993 master-0 kubenswrapper[28120]: I0220 15:13:35.120899 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x"] Feb 20 15:13:35.121617 master-0 kubenswrapper[28120]: E0220 15:13:35.121424 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f082a6-59a7-4bf2-ae8d-efaf494c4efd" containerName="pull" Feb 20 15:13:35.121617 master-0 kubenswrapper[28120]: I0220 15:13:35.121445 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f082a6-59a7-4bf2-ae8d-efaf494c4efd" containerName="pull" Feb 20 15:13:35.121617 master-0 kubenswrapper[28120]: E0220 15:13:35.121464 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f082a6-59a7-4bf2-ae8d-efaf494c4efd" containerName="util" Feb 20 15:13:35.121617 master-0 kubenswrapper[28120]: I0220 15:13:35.121472 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f082a6-59a7-4bf2-ae8d-efaf494c4efd" containerName="util" Feb 20 15:13:35.121617 master-0 kubenswrapper[28120]: E0220 15:13:35.121508 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f082a6-59a7-4bf2-ae8d-efaf494c4efd" containerName="extract" Feb 20 15:13:35.121617 master-0 kubenswrapper[28120]: 
I0220 15:13:35.121520 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f082a6-59a7-4bf2-ae8d-efaf494c4efd" containerName="extract" Feb 20 15:13:35.121863 master-0 kubenswrapper[28120]: I0220 15:13:35.121736 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f082a6-59a7-4bf2-ae8d-efaf494c4efd" containerName="extract" Feb 20 15:13:35.122430 master-0 kubenswrapper[28120]: I0220 15:13:35.122402 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x" Feb 20 15:13:35.135815 master-0 kubenswrapper[28120]: I0220 15:13:35.135769 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn8kw\" (UniqueName: \"kubernetes.io/projected/32d3996e-7968-4604-a414-0a8e8164eb6d-kube-api-access-qn8kw\") pod \"openstack-operator-controller-init-6679bf9b57-z668x\" (UID: \"32d3996e-7968-4604-a414-0a8e8164eb6d\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x" Feb 20 15:13:35.145046 master-0 kubenswrapper[28120]: I0220 15:13:35.145000 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x"] Feb 20 15:13:35.237399 master-0 kubenswrapper[28120]: I0220 15:13:35.236854 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn8kw\" (UniqueName: \"kubernetes.io/projected/32d3996e-7968-4604-a414-0a8e8164eb6d-kube-api-access-qn8kw\") pod \"openstack-operator-controller-init-6679bf9b57-z668x\" (UID: \"32d3996e-7968-4604-a414-0a8e8164eb6d\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x" Feb 20 15:13:35.253951 master-0 kubenswrapper[28120]: I0220 15:13:35.251823 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn8kw\" (UniqueName: 
\"kubernetes.io/projected/32d3996e-7968-4604-a414-0a8e8164eb6d-kube-api-access-qn8kw\") pod \"openstack-operator-controller-init-6679bf9b57-z668x\" (UID: \"32d3996e-7968-4604-a414-0a8e8164eb6d\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x" Feb 20 15:13:35.441285 master-0 kubenswrapper[28120]: I0220 15:13:35.441129 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x" Feb 20 15:13:35.947650 master-0 kubenswrapper[28120]: I0220 15:13:35.947319 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x"] Feb 20 15:13:36.201895 master-0 kubenswrapper[28120]: I0220 15:13:36.201752 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x" event={"ID":"32d3996e-7968-4604-a414-0a8e8164eb6d","Type":"ContainerStarted","Data":"27db055c90b7e0b048c3cc2372d9329fe030c9b9d7ef87f27ccd4fd9a4ed4283"} Feb 20 15:13:41.273694 master-0 kubenswrapper[28120]: I0220 15:13:41.273586 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x" event={"ID":"32d3996e-7968-4604-a414-0a8e8164eb6d","Type":"ContainerStarted","Data":"ee49dc0e7679d9458374b001f189d12115908a0d172c92a8b1188b6a05f42cef"} Feb 20 15:13:41.274631 master-0 kubenswrapper[28120]: I0220 15:13:41.274308 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x" Feb 20 15:13:41.313963 master-0 kubenswrapper[28120]: I0220 15:13:41.311991 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x" podStartSLOduration=1.555551753 podStartE2EDuration="6.311973198s" podCreationTimestamp="2026-02-20 
15:13:35 +0000 UTC" firstStartedPulling="2026-02-20 15:13:35.946632319 +0000 UTC m=+754.207425902" lastFinishedPulling="2026-02-20 15:13:40.703053784 +0000 UTC m=+758.963847347" observedRunningTime="2026-02-20 15:13:41.306456881 +0000 UTC m=+759.567250444" watchObservedRunningTime="2026-02-20 15:13:41.311973198 +0000 UTC m=+759.572766771" Feb 20 15:13:45.443716 master-0 kubenswrapper[28120]: I0220 15:13:45.443594 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-z668x" Feb 20 15:14:06.529891 master-0 kubenswrapper[28120]: I0220 15:14:06.529745 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl"] Feb 20 15:14:06.531657 master-0 kubenswrapper[28120]: I0220 15:14:06.531621 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl" Feb 20 15:14:06.559950 master-0 kubenswrapper[28120]: I0220 15:14:06.552420 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x"] Feb 20 15:14:06.559950 master-0 kubenswrapper[28120]: I0220 15:14:06.553753 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x"
Feb 20 15:14:06.574456 master-0 kubenswrapper[28120]: I0220 15:14:06.574377 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl"]
Feb 20 15:14:06.587044 master-0 kubenswrapper[28120]: I0220 15:14:06.584853 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8gl6\" (UniqueName: \"kubernetes.io/projected/9b98b91c-3e81-430f-8f0e-8127407ec912-kube-api-access-q8gl6\") pod \"cinder-operator-controller-manager-5d946d989d-kmd7x\" (UID: \"9b98b91c-3e81-430f-8f0e-8127407ec912\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x"
Feb 20 15:14:06.587044 master-0 kubenswrapper[28120]: I0220 15:14:06.585022 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmvlr\" (UniqueName: \"kubernetes.io/projected/5bea0f0b-5dca-43c9-a670-9dd77b112777-kube-api-access-fmvlr\") pod \"barbican-operator-controller-manager-868647ff47-tv9jl\" (UID: \"5bea0f0b-5dca-43c9-a670-9dd77b112777\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl"
Feb 20 15:14:06.594537 master-0 kubenswrapper[28120]: I0220 15:14:06.594269 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x"]
Feb 20 15:14:06.616092 master-0 kubenswrapper[28120]: I0220 15:14:06.609165 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj"]
Feb 20 15:14:06.616092 master-0 kubenswrapper[28120]: I0220 15:14:06.615279 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj"
Feb 20 15:14:06.630846 master-0 kubenswrapper[28120]: I0220 15:14:06.630246 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj"]
Feb 20 15:14:06.650354 master-0 kubenswrapper[28120]: I0220 15:14:06.650179 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr"]
Feb 20 15:14:06.698173 master-0 kubenswrapper[28120]: I0220 15:14:06.693135 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr"
Feb 20 15:14:06.701266 master-0 kubenswrapper[28120]: I0220 15:14:06.701205 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8gl6\" (UniqueName: \"kubernetes.io/projected/9b98b91c-3e81-430f-8f0e-8127407ec912-kube-api-access-q8gl6\") pod \"cinder-operator-controller-manager-5d946d989d-kmd7x\" (UID: \"9b98b91c-3e81-430f-8f0e-8127407ec912\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x"
Feb 20 15:14:06.701694 master-0 kubenswrapper[28120]: I0220 15:14:06.701660 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmvlr\" (UniqueName: \"kubernetes.io/projected/5bea0f0b-5dca-43c9-a670-9dd77b112777-kube-api-access-fmvlr\") pod \"barbican-operator-controller-manager-868647ff47-tv9jl\" (UID: \"5bea0f0b-5dca-43c9-a670-9dd77b112777\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl"
Feb 20 15:14:06.738195 master-0 kubenswrapper[28120]: I0220 15:14:06.738119 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr"]
Feb 20 15:14:06.749686 master-0 kubenswrapper[28120]: I0220 15:14:06.745308 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmvlr\" (UniqueName: \"kubernetes.io/projected/5bea0f0b-5dca-43c9-a670-9dd77b112777-kube-api-access-fmvlr\") pod \"barbican-operator-controller-manager-868647ff47-tv9jl\" (UID: \"5bea0f0b-5dca-43c9-a670-9dd77b112777\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl"
Feb 20 15:14:06.749686 master-0 kubenswrapper[28120]: I0220 15:14:06.747021 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8gl6\" (UniqueName: \"kubernetes.io/projected/9b98b91c-3e81-430f-8f0e-8127407ec912-kube-api-access-q8gl6\") pod \"cinder-operator-controller-manager-5d946d989d-kmd7x\" (UID: \"9b98b91c-3e81-430f-8f0e-8127407ec912\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x"
Feb 20 15:14:06.759253 master-0 kubenswrapper[28120]: I0220 15:14:06.756794 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt"]
Feb 20 15:14:06.759253 master-0 kubenswrapper[28120]: I0220 15:14:06.757816 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt"
Feb 20 15:14:06.764297 master-0 kubenswrapper[28120]: I0220 15:14:06.764252 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp"]
Feb 20 15:14:06.765196 master-0 kubenswrapper[28120]: I0220 15:14:06.765117 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp"
Feb 20 15:14:06.785695 master-0 kubenswrapper[28120]: I0220 15:14:06.785566 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt"]
Feb 20 15:14:06.803253 master-0 kubenswrapper[28120]: I0220 15:14:06.803141 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqzpl\" (UniqueName: \"kubernetes.io/projected/1ea0ec03-f7e5-4698-ad90-a2977d8af231-kube-api-access-lqzpl\") pod \"glance-operator-controller-manager-77987464f4-sbrtr\" (UID: \"1ea0ec03-f7e5-4698-ad90-a2977d8af231\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr"
Feb 20 15:14:06.803253 master-0 kubenswrapper[28120]: I0220 15:14:06.803245 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxwpr\" (UniqueName: \"kubernetes.io/projected/a6071656-f5a6-46df-81d3-71d286384ca7-kube-api-access-wxwpr\") pod \"heat-operator-controller-manager-69f49c598c-jspvt\" (UID: \"a6071656-f5a6-46df-81d3-71d286384ca7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt"
Feb 20 15:14:06.803504 master-0 kubenswrapper[28120]: I0220 15:14:06.803278 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x95lc\" (UniqueName: \"kubernetes.io/projected/35ec9b56-658e-4a51-9fef-5b628826b83d-kube-api-access-x95lc\") pod \"horizon-operator-controller-manager-5b9b8895d5-kqsmp\" (UID: \"35ec9b56-658e-4a51-9fef-5b628826b83d\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp"
Feb 20 15:14:06.803504 master-0 kubenswrapper[28120]: I0220 15:14:06.803310 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6thcz\" (UniqueName: \"kubernetes.io/projected/571a5455-abe5-4154-a213-ac7d403f8bd2-kube-api-access-6thcz\") pod \"designate-operator-controller-manager-6d8bf5c495-cbwxj\" (UID: \"571a5455-abe5-4154-a213-ac7d403f8bd2\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj"
Feb 20 15:14:06.813125 master-0 kubenswrapper[28120]: I0220 15:14:06.813069 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp"]
Feb 20 15:14:06.866901 master-0 kubenswrapper[28120]: I0220 15:14:06.866793 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"]
Feb 20 15:14:06.870557 master-0 kubenswrapper[28120]: I0220 15:14:06.869003 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"
Feb 20 15:14:06.873256 master-0 kubenswrapper[28120]: I0220 15:14:06.873225 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 20 15:14:06.885800 master-0 kubenswrapper[28120]: I0220 15:14:06.885731 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"]
Feb 20 15:14:06.895403 master-0 kubenswrapper[28120]: I0220 15:14:06.893478 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj"]
Feb 20 15:14:06.898838 master-0 kubenswrapper[28120]: I0220 15:14:06.896132 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj"
Feb 20 15:14:06.909687 master-0 kubenswrapper[28120]: I0220 15:14:06.909636 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxwpr\" (UniqueName: \"kubernetes.io/projected/a6071656-f5a6-46df-81d3-71d286384ca7-kube-api-access-wxwpr\") pod \"heat-operator-controller-manager-69f49c598c-jspvt\" (UID: \"a6071656-f5a6-46df-81d3-71d286384ca7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt"
Feb 20 15:14:06.909885 master-0 kubenswrapper[28120]: I0220 15:14:06.909715 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x95lc\" (UniqueName: \"kubernetes.io/projected/35ec9b56-658e-4a51-9fef-5b628826b83d-kube-api-access-x95lc\") pod \"horizon-operator-controller-manager-5b9b8895d5-kqsmp\" (UID: \"35ec9b56-658e-4a51-9fef-5b628826b83d\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp"
Feb 20 15:14:06.909885 master-0 kubenswrapper[28120]: I0220 15:14:06.909759 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxth7\" (UniqueName: \"kubernetes.io/projected/2788df58-11e5-421f-96c5-ddf698d5e2ee-kube-api-access-lxth7\") pod \"ironic-operator-controller-manager-554564d7fc-qkpbj\" (UID: \"2788df58-11e5-421f-96c5-ddf698d5e2ee\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj"
Feb 20 15:14:06.909885 master-0 kubenswrapper[28120]: I0220 15:14:06.909796 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-9pk5l\" (UID: \"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"
Feb 20 15:14:06.909885 master-0 kubenswrapper[28120]: I0220 15:14:06.909835 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6thcz\" (UniqueName: \"kubernetes.io/projected/571a5455-abe5-4154-a213-ac7d403f8bd2-kube-api-access-6thcz\") pod \"designate-operator-controller-manager-6d8bf5c495-cbwxj\" (UID: \"571a5455-abe5-4154-a213-ac7d403f8bd2\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj"
Feb 20 15:14:06.910034 master-0 kubenswrapper[28120]: I0220 15:14:06.909916 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqzpl\" (UniqueName: \"kubernetes.io/projected/1ea0ec03-f7e5-4698-ad90-a2977d8af231-kube-api-access-lqzpl\") pod \"glance-operator-controller-manager-77987464f4-sbrtr\" (UID: \"1ea0ec03-f7e5-4698-ad90-a2977d8af231\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr"
Feb 20 15:14:06.910034 master-0 kubenswrapper[28120]: I0220 15:14:06.909976 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qj8s\" (UniqueName: \"kubernetes.io/projected/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-kube-api-access-6qj8s\") pod \"infra-operator-controller-manager-5f879c76b6-9pk5l\" (UID: \"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"
Feb 20 15:14:06.912367 master-0 kubenswrapper[28120]: I0220 15:14:06.912315 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl"
Feb 20 15:14:06.913609 master-0 kubenswrapper[28120]: I0220 15:14:06.913584 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj"]
Feb 20 15:14:06.933833 master-0 kubenswrapper[28120]: I0220 15:14:06.933787 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6"]
Feb 20 15:14:06.935126 master-0 kubenswrapper[28120]: I0220 15:14:06.935100 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6"
Feb 20 15:14:06.939867 master-0 kubenswrapper[28120]: I0220 15:14:06.939825 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x"
Feb 20 15:14:06.944812 master-0 kubenswrapper[28120]: I0220 15:14:06.944777 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6"]
Feb 20 15:14:06.951330 master-0 kubenswrapper[28120]: I0220 15:14:06.949013 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x95lc\" (UniqueName: \"kubernetes.io/projected/35ec9b56-658e-4a51-9fef-5b628826b83d-kube-api-access-x95lc\") pod \"horizon-operator-controller-manager-5b9b8895d5-kqsmp\" (UID: \"35ec9b56-658e-4a51-9fef-5b628826b83d\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp"
Feb 20 15:14:06.951330 master-0 kubenswrapper[28120]: I0220 15:14:06.949123 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxwpr\" (UniqueName: \"kubernetes.io/projected/a6071656-f5a6-46df-81d3-71d286384ca7-kube-api-access-wxwpr\") pod \"heat-operator-controller-manager-69f49c598c-jspvt\" (UID: \"a6071656-f5a6-46df-81d3-71d286384ca7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt"
Feb 20 15:14:06.953341 master-0 kubenswrapper[28120]: I0220 15:14:06.953300 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqzpl\" (UniqueName: \"kubernetes.io/projected/1ea0ec03-f7e5-4698-ad90-a2977d8af231-kube-api-access-lqzpl\") pod \"glance-operator-controller-manager-77987464f4-sbrtr\" (UID: \"1ea0ec03-f7e5-4698-ad90-a2977d8af231\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr"
Feb 20 15:14:06.953824 master-0 kubenswrapper[28120]: I0220 15:14:06.953793 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6thcz\" (UniqueName: \"kubernetes.io/projected/571a5455-abe5-4154-a213-ac7d403f8bd2-kube-api-access-6thcz\") pod \"designate-operator-controller-manager-6d8bf5c495-cbwxj\" (UID: \"571a5455-abe5-4154-a213-ac7d403f8bd2\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj"
Feb 20 15:14:06.966805 master-0 kubenswrapper[28120]: I0220 15:14:06.966752 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m"]
Feb 20 15:14:06.971955 master-0 kubenswrapper[28120]: I0220 15:14:06.971903 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m"
Feb 20 15:14:06.978153 master-0 kubenswrapper[28120]: I0220 15:14:06.976558 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m"]
Feb 20 15:14:06.982384 master-0 kubenswrapper[28120]: I0220 15:14:06.982335 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj"
Feb 20 15:14:06.986643 master-0 kubenswrapper[28120]: I0220 15:14:06.986596 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d"]
Feb 20 15:14:06.988008 master-0 kubenswrapper[28120]: I0220 15:14:06.987954 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d"
Feb 20 15:14:07.001516 master-0 kubenswrapper[28120]: I0220 15:14:07.001456 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d"]
Feb 20 15:14:07.008243 master-0 kubenswrapper[28120]: I0220 15:14:07.008214 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp"]
Feb 20 15:14:07.010004 master-0 kubenswrapper[28120]: I0220 15:14:07.009694 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp"
Feb 20 15:14:07.011305 master-0 kubenswrapper[28120]: I0220 15:14:07.011271 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxth7\" (UniqueName: \"kubernetes.io/projected/2788df58-11e5-421f-96c5-ddf698d5e2ee-kube-api-access-lxth7\") pod \"ironic-operator-controller-manager-554564d7fc-qkpbj\" (UID: \"2788df58-11e5-421f-96c5-ddf698d5e2ee\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj"
Feb 20 15:14:07.011367 master-0 kubenswrapper[28120]: I0220 15:14:07.011317 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-9pk5l\" (UID: \"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"
Feb 20 15:14:07.011410 master-0 kubenswrapper[28120]: I0220 15:14:07.011383 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qj8s\" (UniqueName: \"kubernetes.io/projected/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-kube-api-access-6qj8s\") pod \"infra-operator-controller-manager-5f879c76b6-9pk5l\" (UID: \"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"
Feb 20 15:14:07.011410 master-0 kubenswrapper[28120]: I0220 15:14:07.011405 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jltgh\" (UniqueName: \"kubernetes.io/projected/c4be1807-9b9c-4945-a039-2a0dbdf5bb60-kube-api-access-jltgh\") pod \"mariadb-operator-controller-manager-6994f66f48-nvh8d\" (UID: \"c4be1807-9b9c-4945-a039-2a0dbdf5bb60\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d"
Feb 20 15:14:07.011470 master-0 kubenswrapper[28120]: I0220 15:14:07.011423 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgwjg\" (UniqueName: \"kubernetes.io/projected/844f7139-c477-49ba-8219-80aa439f8ff0-kube-api-access-kgwjg\") pod \"keystone-operator-controller-manager-b4d948c87-kkrd6\" (UID: \"844f7139-c477-49ba-8219-80aa439f8ff0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6"
Feb 20 15:14:07.011470 master-0 kubenswrapper[28120]: I0220 15:14:07.011451 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf74x\" (UniqueName: \"kubernetes.io/projected/04007ddd-2b6c-4d07-b3c8-a783aa7de40f-kube-api-access-kf74x\") pod \"manila-operator-controller-manager-54f6768c69-hp67m\" (UID: \"04007ddd-2b6c-4d07-b3c8-a783aa7de40f\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m"
Feb 20 15:14:07.011850 master-0 kubenswrapper[28120]: E0220 15:14:07.011794 28120 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 20 15:14:07.011850 master-0 kubenswrapper[28120]: E0220 15:14:07.011841 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert podName:86ae9025-ec1e-4e3a-b990-cb73f94ef8f6 nodeName:}" failed. No retries permitted until 2026-02-20 15:14:07.51182367 +0000 UTC m=+785.772617233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert") pod "infra-operator-controller-manager-5f879c76b6-9pk5l" (UID: "86ae9025-ec1e-4e3a-b990-cb73f94ef8f6") : secret "infra-operator-webhook-server-cert" not found
Feb 20 15:14:07.016977 master-0 kubenswrapper[28120]: I0220 15:14:07.016013 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp"]
Feb 20 15:14:07.042265 master-0 kubenswrapper[28120]: I0220 15:14:07.039642 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxth7\" (UniqueName: \"kubernetes.io/projected/2788df58-11e5-421f-96c5-ddf698d5e2ee-kube-api-access-lxth7\") pod \"ironic-operator-controller-manager-554564d7fc-qkpbj\" (UID: \"2788df58-11e5-421f-96c5-ddf698d5e2ee\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj"
Feb 20 15:14:07.045717 master-0 kubenswrapper[28120]: I0220 15:14:07.045671 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qj8s\" (UniqueName: \"kubernetes.io/projected/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-kube-api-access-6qj8s\") pod \"infra-operator-controller-manager-5f879c76b6-9pk5l\" (UID: \"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"
Feb 20 15:14:07.045870 master-0 kubenswrapper[28120]: I0220 15:14:07.045733 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk"]
Feb 20 15:14:07.047634 master-0 kubenswrapper[28120]: I0220 15:14:07.046740 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk"
Feb 20 15:14:07.054046 master-0 kubenswrapper[28120]: I0220 15:14:07.054004 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk"]
Feb 20 15:14:07.083235 master-0 kubenswrapper[28120]: I0220 15:14:07.081937 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc"]
Feb 20 15:14:07.083903 master-0 kubenswrapper[28120]: I0220 15:14:07.083861 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc"
Feb 20 15:14:07.099196 master-0 kubenswrapper[28120]: I0220 15:14:07.099162 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc"]
Feb 20 15:14:07.113105 master-0 kubenswrapper[28120]: I0220 15:14:07.113065 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jltgh\" (UniqueName: \"kubernetes.io/projected/c4be1807-9b9c-4945-a039-2a0dbdf5bb60-kube-api-access-jltgh\") pod \"mariadb-operator-controller-manager-6994f66f48-nvh8d\" (UID: \"c4be1807-9b9c-4945-a039-2a0dbdf5bb60\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d"
Feb 20 15:14:07.113230 master-0 kubenswrapper[28120]: I0220 15:14:07.113110 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgwjg\" (UniqueName: \"kubernetes.io/projected/844f7139-c477-49ba-8219-80aa439f8ff0-kube-api-access-kgwjg\") pod \"keystone-operator-controller-manager-b4d948c87-kkrd6\" (UID: \"844f7139-c477-49ba-8219-80aa439f8ff0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6"
Feb 20 15:14:07.113230 master-0 kubenswrapper[28120]: I0220 15:14:07.113145 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbt8d\" (UniqueName: \"kubernetes.io/projected/b7484e6b-acc7-4e73-bde4-a4a31808f501-kube-api-access-cbt8d\") pod \"nova-operator-controller-manager-567668f5cf-g75bk\" (UID: \"b7484e6b-acc7-4e73-bde4-a4a31808f501\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk"
Feb 20 15:14:07.113230 master-0 kubenswrapper[28120]: I0220 15:14:07.113165 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf74x\" (UniqueName: \"kubernetes.io/projected/04007ddd-2b6c-4d07-b3c8-a783aa7de40f-kube-api-access-kf74x\") pod \"manila-operator-controller-manager-54f6768c69-hp67m\" (UID: \"04007ddd-2b6c-4d07-b3c8-a783aa7de40f\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m"
Feb 20 15:14:07.113323 master-0 kubenswrapper[28120]: I0220 15:14:07.113277 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qdm9\" (UniqueName: \"kubernetes.io/projected/684221fe-1668-49a2-8bcd-1e23d8e6248f-kube-api-access-2qdm9\") pod \"octavia-operator-controller-manager-69f8888797-pz7sc\" (UID: \"684221fe-1668-49a2-8bcd-1e23d8e6248f\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc"
Feb 20 15:14:07.113353 master-0 kubenswrapper[28120]: I0220 15:14:07.113342 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlbvz\" (UniqueName: \"kubernetes.io/projected/406c9444-8bab-4866-8681-7cc8365af4ae-kube-api-access-wlbvz\") pod \"neutron-operator-controller-manager-64ddbf8bb-slfqp\" (UID: \"406c9444-8bab-4866-8681-7cc8365af4ae\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp"
Feb 20 15:14:07.116848 master-0 kubenswrapper[28120]: I0220 15:14:07.116808 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n"]
Feb 20 15:14:07.118231 master-0 kubenswrapper[28120]: I0220 15:14:07.118096 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n"
Feb 20 15:14:07.118740 master-0 kubenswrapper[28120]: I0220 15:14:07.118684 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr"
Feb 20 15:14:07.121371 master-0 kubenswrapper[28120]: I0220 15:14:07.120838 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 20 15:14:07.130487 master-0 kubenswrapper[28120]: I0220 15:14:07.128361 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n"]
Feb 20 15:14:07.131700 master-0 kubenswrapper[28120]: I0220 15:14:07.131642 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt"
Feb 20 15:14:07.133342 master-0 kubenswrapper[28120]: I0220 15:14:07.133305 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf74x\" (UniqueName: \"kubernetes.io/projected/04007ddd-2b6c-4d07-b3c8-a783aa7de40f-kube-api-access-kf74x\") pod \"manila-operator-controller-manager-54f6768c69-hp67m\" (UID: \"04007ddd-2b6c-4d07-b3c8-a783aa7de40f\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m"
Feb 20 15:14:07.149341 master-0 kubenswrapper[28120]: I0220 15:14:07.138244 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8"]
Feb 20 15:14:07.149341 master-0 kubenswrapper[28120]: I0220 15:14:07.139577 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8"
Feb 20 15:14:07.149341 master-0 kubenswrapper[28120]: I0220 15:14:07.140980 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp"
Feb 20 15:14:07.149341 master-0 kubenswrapper[28120]: I0220 15:14:07.145445 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgwjg\" (UniqueName: \"kubernetes.io/projected/844f7139-c477-49ba-8219-80aa439f8ff0-kube-api-access-kgwjg\") pod \"keystone-operator-controller-manager-b4d948c87-kkrd6\" (UID: \"844f7139-c477-49ba-8219-80aa439f8ff0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6"
Feb 20 15:14:07.163602 master-0 kubenswrapper[28120]: I0220 15:14:07.163570 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jltgh\" (UniqueName: \"kubernetes.io/projected/c4be1807-9b9c-4945-a039-2a0dbdf5bb60-kube-api-access-jltgh\") pod \"mariadb-operator-controller-manager-6994f66f48-nvh8d\" (UID: \"c4be1807-9b9c-4945-a039-2a0dbdf5bb60\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d"
Feb 20 15:14:07.188071 master-0 kubenswrapper[28120]: I0220 15:14:07.187240 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8"]
Feb 20 15:14:07.234633 master-0 kubenswrapper[28120]: I0220 15:14:07.215742 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn"]
Feb 20 15:14:07.234633 master-0 kubenswrapper[28120]: I0220 15:14:07.216842 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn"
Feb 20 15:14:07.234633 master-0 kubenswrapper[28120]: I0220 15:14:07.218257 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qdm9\" (UniqueName: \"kubernetes.io/projected/684221fe-1668-49a2-8bcd-1e23d8e6248f-kube-api-access-2qdm9\") pod \"octavia-operator-controller-manager-69f8888797-pz7sc\" (UID: \"684221fe-1668-49a2-8bcd-1e23d8e6248f\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc"
Feb 20 15:14:07.234633 master-0 kubenswrapper[28120]: I0220 15:14:07.218302 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlbvz\" (UniqueName: \"kubernetes.io/projected/406c9444-8bab-4866-8681-7cc8365af4ae-kube-api-access-wlbvz\") pod \"neutron-operator-controller-manager-64ddbf8bb-slfqp\" (UID: \"406c9444-8bab-4866-8681-7cc8365af4ae\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp"
Feb 20 15:14:07.234633 master-0 kubenswrapper[28120]: I0220 15:14:07.218409 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbt8d\" (UniqueName: \"kubernetes.io/projected/b7484e6b-acc7-4e73-bde4-a4a31808f501-kube-api-access-cbt8d\") pod \"nova-operator-controller-manager-567668f5cf-g75bk\" (UID: \"b7484e6b-acc7-4e73-bde4-a4a31808f501\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk"
Feb 20 15:14:07.252944 master-0 kubenswrapper[28120]: I0220 15:14:07.248367 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlbvz\" (UniqueName: \"kubernetes.io/projected/406c9444-8bab-4866-8681-7cc8365af4ae-kube-api-access-wlbvz\") pod \"neutron-operator-controller-manager-64ddbf8bb-slfqp\" (UID: \"406c9444-8bab-4866-8681-7cc8365af4ae\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp"
Feb 20 15:14:07.252944 master-0 kubenswrapper[28120]: I0220 15:14:07.248415 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbt8d\" (UniqueName: \"kubernetes.io/projected/b7484e6b-acc7-4e73-bde4-a4a31808f501-kube-api-access-cbt8d\") pod \"nova-operator-controller-manager-567668f5cf-g75bk\" (UID: \"b7484e6b-acc7-4e73-bde4-a4a31808f501\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk"
Feb 20 15:14:07.252944 master-0 kubenswrapper[28120]: I0220 15:14:07.250584 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj"
Feb 20 15:14:07.252944 master-0 kubenswrapper[28120]: I0220 15:14:07.251322 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qdm9\" (UniqueName: \"kubernetes.io/projected/684221fe-1668-49a2-8bcd-1e23d8e6248f-kube-api-access-2qdm9\") pod \"octavia-operator-controller-manager-69f8888797-pz7sc\" (UID: \"684221fe-1668-49a2-8bcd-1e23d8e6248f\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc"
Feb 20 15:14:07.286985 master-0 kubenswrapper[28120]: I0220 15:14:07.286384 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn"]
Feb 20 15:14:07.302131 master-0 kubenswrapper[28120]: I0220 15:14:07.299541 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-n865p"]
Feb 20 15:14:07.302131 master-0 kubenswrapper[28120]: I0220 15:14:07.300773 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n865p"
Feb 20 15:14:07.337500 master-0 kubenswrapper[28120]: I0220 15:14:07.332119 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n\" (UID: \"f4f68dcb-1812-4a20-944f-c5d9256a6f97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n"
Feb 20 15:14:07.337500 master-0 kubenswrapper[28120]: I0220 15:14:07.334370 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zp29\" (UniqueName: \"kubernetes.io/projected/09e800d6-539c-4388-9101-b9fdef4c0969-kube-api-access-6zp29\") pod \"swift-operator-controller-manager-68f46476f-n865p\" (UID: \"09e800d6-539c-4388-9101-b9fdef4c0969\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-n865p"
Feb 20 15:14:07.338036 master-0 kubenswrapper[28120]: I0220 15:14:07.337992 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j77m\" (UniqueName: \"kubernetes.io/projected/d054c93d-eb44-48ff-9005-a806434bf6a6-kube-api-access-6j77m\") pod \"ovn-operator-controller-manager-d44cf6b75-p22m8\" (UID: \"d054c93d-eb44-48ff-9005-a806434bf6a6\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8"
Feb 20 15:14:07.338191 master-0 kubenswrapper[28120]: I0220 15:14:07.338173 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m22ps\" (UniqueName: \"kubernetes.io/projected/f4f68dcb-1812-4a20-944f-c5d9256a6f97-kube-api-access-m22ps\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n\" (UID: \"f4f68dcb-1812-4a20-944f-c5d9256a6f97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n"
Feb 20 15:14:07.338350 master-0 kubenswrapper[28120]: I0220 15:14:07.338330 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r8bz\" (UniqueName: \"kubernetes.io/projected/63ce0e64-a2f8-4f28-bfd6-aa8635c38a15-kube-api-access-6r8bz\") pod \"placement-operator-controller-manager-8497b45c89-94rgn\" (UID: \"63ce0e64-a2f8-4f28-bfd6-aa8635c38a15\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn"
Feb 20 15:14:07.354954 master-0 kubenswrapper[28120]: I0220 15:14:07.350995 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6"
Feb 20 15:14:07.354954 master-0 kubenswrapper[28120]: I0220 15:14:07.354396 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-n865p"]
Feb 20 15:14:07.365661 master-0 kubenswrapper[28120]: I0220 15:14:07.365310 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m"
Feb 20 15:14:07.367249 master-0 kubenswrapper[28120]: I0220 15:14:07.367228 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7"]
Feb 20 15:14:07.369788 master-0 kubenswrapper[28120]: I0220 15:14:07.369479 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7"
Feb 20 15:14:07.397949 master-0 kubenswrapper[28120]: I0220 15:14:07.385067 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7"]
Feb 20 15:14:07.401105 master-0 kubenswrapper[28120]: I0220 15:14:07.401067 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-7dxcp"]
Feb 20 15:14:07.401406 master-0 kubenswrapper[28120]: I0220 15:14:07.401353 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp"
Feb 20 15:14:07.402444 master-0 kubenswrapper[28120]: I0220 15:14:07.402342 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp"
Feb 20 15:14:07.403391 master-0 kubenswrapper[28120]: I0220 15:14:07.402760 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d"
Feb 20 15:14:07.405058 master-0 kubenswrapper[28120]: I0220 15:14:07.405027 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk"
Feb 20 15:14:07.429485 master-0 kubenswrapper[28120]: I0220 15:14:07.417468 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-7dxcp"]
Feb 20 15:14:07.439531 master-0 kubenswrapper[28120]: I0220 15:14:07.437180 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc" Feb 20 15:14:07.446671 master-0 kubenswrapper[28120]: I0220 15:14:07.446601 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n\" (UID: \"f4f68dcb-1812-4a20-944f-c5d9256a6f97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" Feb 20 15:14:07.446759 master-0 kubenswrapper[28120]: I0220 15:14:07.446711 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zp29\" (UniqueName: \"kubernetes.io/projected/09e800d6-539c-4388-9101-b9fdef4c0969-kube-api-access-6zp29\") pod \"swift-operator-controller-manager-68f46476f-n865p\" (UID: \"09e800d6-539c-4388-9101-b9fdef4c0969\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-n865p" Feb 20 15:14:07.446795 master-0 kubenswrapper[28120]: I0220 15:14:07.446784 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j77m\" (UniqueName: \"kubernetes.io/projected/d054c93d-eb44-48ff-9005-a806434bf6a6-kube-api-access-6j77m\") pod \"ovn-operator-controller-manager-d44cf6b75-p22m8\" (UID: \"d054c93d-eb44-48ff-9005-a806434bf6a6\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8" Feb 20 15:14:07.446899 master-0 kubenswrapper[28120]: I0220 15:14:07.446823 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m22ps\" (UniqueName: \"kubernetes.io/projected/f4f68dcb-1812-4a20-944f-c5d9256a6f97-kube-api-access-m22ps\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n\" (UID: \"f4f68dcb-1812-4a20-944f-c5d9256a6f97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" Feb 
20 15:14:07.446899 master-0 kubenswrapper[28120]: I0220 15:14:07.446870 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r8bz\" (UniqueName: \"kubernetes.io/projected/63ce0e64-a2f8-4f28-bfd6-aa8635c38a15-kube-api-access-6r8bz\") pod \"placement-operator-controller-manager-8497b45c89-94rgn\" (UID: \"63ce0e64-a2f8-4f28-bfd6-aa8635c38a15\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn" Feb 20 15:14:07.446980 master-0 kubenswrapper[28120]: I0220 15:14:07.446914 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjwsd\" (UniqueName: \"kubernetes.io/projected/26d662d7-afd6-428e-90e1-3c561b8cbaa9-kube-api-access-rjwsd\") pod \"telemetry-operator-controller-manager-7f45b4ff68-cdwb7\" (UID: \"26d662d7-afd6-428e-90e1-3c561b8cbaa9\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7" Feb 20 15:14:07.447258 master-0 kubenswrapper[28120]: I0220 15:14:07.447024 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cst9\" (UniqueName: \"kubernetes.io/projected/96531071-f49e-44d4-98d3-3d4e3e24867a-kube-api-access-5cst9\") pod \"test-operator-controller-manager-7866795846-7dxcp\" (UID: \"96531071-f49e-44d4-98d3-3d4e3e24867a\") " pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp" Feb 20 15:14:07.463951 master-0 kubenswrapper[28120]: E0220 15:14:07.453982 28120 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 15:14:07.463951 master-0 kubenswrapper[28120]: E0220 15:14:07.454076 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert podName:f4f68dcb-1812-4a20-944f-c5d9256a6f97 nodeName:}" failed. 
No retries permitted until 2026-02-20 15:14:07.954046229 +0000 UTC m=+786.214839782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" (UID: "f4f68dcb-1812-4a20-944f-c5d9256a6f97") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 15:14:07.472152 master-0 kubenswrapper[28120]: I0220 15:14:07.467503 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g"] Feb 20 15:14:07.472152 master-0 kubenswrapper[28120]: I0220 15:14:07.469629 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g" Feb 20 15:14:07.472152 master-0 kubenswrapper[28120]: I0220 15:14:07.471818 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j77m\" (UniqueName: \"kubernetes.io/projected/d054c93d-eb44-48ff-9005-a806434bf6a6-kube-api-access-6j77m\") pod \"ovn-operator-controller-manager-d44cf6b75-p22m8\" (UID: \"d054c93d-eb44-48ff-9005-a806434bf6a6\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8" Feb 20 15:14:07.473018 master-0 kubenswrapper[28120]: I0220 15:14:07.472978 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zp29\" (UniqueName: \"kubernetes.io/projected/09e800d6-539c-4388-9101-b9fdef4c0969-kube-api-access-6zp29\") pod \"swift-operator-controller-manager-68f46476f-n865p\" (UID: \"09e800d6-539c-4388-9101-b9fdef4c0969\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-n865p" Feb 20 15:14:07.475738 master-0 kubenswrapper[28120]: I0220 15:14:07.475562 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r8bz\" (UniqueName: 
\"kubernetes.io/projected/63ce0e64-a2f8-4f28-bfd6-aa8635c38a15-kube-api-access-6r8bz\") pod \"placement-operator-controller-manager-8497b45c89-94rgn\" (UID: \"63ce0e64-a2f8-4f28-bfd6-aa8635c38a15\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn" Feb 20 15:14:07.478415 master-0 kubenswrapper[28120]: I0220 15:14:07.478374 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m22ps\" (UniqueName: \"kubernetes.io/projected/f4f68dcb-1812-4a20-944f-c5d9256a6f97-kube-api-access-m22ps\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n\" (UID: \"f4f68dcb-1812-4a20-944f-c5d9256a6f97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" Feb 20 15:14:07.481587 master-0 kubenswrapper[28120]: I0220 15:14:07.479269 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g"] Feb 20 15:14:07.496792 master-0 kubenswrapper[28120]: I0220 15:14:07.496751 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8" Feb 20 15:14:07.552259 master-0 kubenswrapper[28120]: I0220 15:14:07.549528 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5lxz\" (UniqueName: \"kubernetes.io/projected/48a4847f-0f2f-496b-b07e-4249ed44547c-kube-api-access-v5lxz\") pod \"watcher-operator-controller-manager-5db88f68c-nk84g\" (UID: \"48a4847f-0f2f-496b-b07e-4249ed44547c\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g" Feb 20 15:14:07.552259 master-0 kubenswrapper[28120]: I0220 15:14:07.549622 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjwsd\" (UniqueName: \"kubernetes.io/projected/26d662d7-afd6-428e-90e1-3c561b8cbaa9-kube-api-access-rjwsd\") pod \"telemetry-operator-controller-manager-7f45b4ff68-cdwb7\" (UID: \"26d662d7-afd6-428e-90e1-3c561b8cbaa9\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7" Feb 20 15:14:07.552259 master-0 kubenswrapper[28120]: I0220 15:14:07.549684 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-9pk5l\" (UID: \"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l" Feb 20 15:14:07.552259 master-0 kubenswrapper[28120]: I0220 15:14:07.549706 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cst9\" (UniqueName: \"kubernetes.io/projected/96531071-f49e-44d4-98d3-3d4e3e24867a-kube-api-access-5cst9\") pod \"test-operator-controller-manager-7866795846-7dxcp\" (UID: \"96531071-f49e-44d4-98d3-3d4e3e24867a\") " pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp" Feb 20 15:14:07.552259 
master-0 kubenswrapper[28120]: I0220 15:14:07.550211 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4"] Feb 20 15:14:07.552259 master-0 kubenswrapper[28120]: E0220 15:14:07.550397 28120 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 20 15:14:07.552259 master-0 kubenswrapper[28120]: E0220 15:14:07.550488 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert podName:86ae9025-ec1e-4e3a-b990-cb73f94ef8f6 nodeName:}" failed. No retries permitted until 2026-02-20 15:14:08.550462813 +0000 UTC m=+786.811256376 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert") pod "infra-operator-controller-manager-5f879c76b6-9pk5l" (UID: "86ae9025-ec1e-4e3a-b990-cb73f94ef8f6") : secret "infra-operator-webhook-server-cert" not found Feb 20 15:14:07.553188 master-0 kubenswrapper[28120]: I0220 15:14:07.553044 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:07.562322 master-0 kubenswrapper[28120]: I0220 15:14:07.562271 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4"] Feb 20 15:14:07.566434 master-0 kubenswrapper[28120]: I0220 15:14:07.566121 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 20 15:14:07.566434 master-0 kubenswrapper[28120]: I0220 15:14:07.566216 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 20 15:14:07.573040 master-0 kubenswrapper[28120]: I0220 15:14:07.571264 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjwsd\" (UniqueName: \"kubernetes.io/projected/26d662d7-afd6-428e-90e1-3c561b8cbaa9-kube-api-access-rjwsd\") pod \"telemetry-operator-controller-manager-7f45b4ff68-cdwb7\" (UID: \"26d662d7-afd6-428e-90e1-3c561b8cbaa9\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7" Feb 20 15:14:07.584285 master-0 kubenswrapper[28120]: I0220 15:14:07.583313 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x"] Feb 20 15:14:07.597232 master-0 kubenswrapper[28120]: I0220 15:14:07.597199 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cst9\" (UniqueName: \"kubernetes.io/projected/96531071-f49e-44d4-98d3-3d4e3e24867a-kube-api-access-5cst9\") pod \"test-operator-controller-manager-7866795846-7dxcp\" (UID: \"96531071-f49e-44d4-98d3-3d4e3e24867a\") " pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp" Feb 20 15:14:07.598738 master-0 kubenswrapper[28120]: I0220 15:14:07.597536 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn" Feb 20 15:14:07.608889 master-0 kubenswrapper[28120]: I0220 15:14:07.607802 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x"] Feb 20 15:14:07.608889 master-0 kubenswrapper[28120]: I0220 15:14:07.607905 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x" Feb 20 15:14:07.624500 master-0 kubenswrapper[28120]: W0220 15:14:07.624356 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bea0f0b_5dca_43c9_a670_9dd77b112777.slice/crio-137f9952d843c3390fb748a76ef36c81155eeac18bfd3d0f789b148fd788f77c WatchSource:0}: Error finding container 137f9952d843c3390fb748a76ef36c81155eeac18bfd3d0f789b148fd788f77c: Status 404 returned error can't find the container with id 137f9952d843c3390fb748a76ef36c81155eeac18bfd3d0f789b148fd788f77c Feb 20 15:14:07.641066 master-0 kubenswrapper[28120]: I0220 15:14:07.640165 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n865p" Feb 20 15:14:07.651023 master-0 kubenswrapper[28120]: I0220 15:14:07.650325 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl"] Feb 20 15:14:07.651307 master-0 kubenswrapper[28120]: I0220 15:14:07.651257 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v89hf\" (UniqueName: \"kubernetes.io/projected/6dafbcda-4fec-47a9-b4d1-950010f65d3e-kube-api-access-v89hf\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:07.651375 master-0 kubenswrapper[28120]: I0220 15:14:07.651355 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rgkc\" (UniqueName: \"kubernetes.io/projected/6cb742a4-b88d-48fd-8f62-8e48250236d6-kube-api-access-5rgkc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-j9m5x\" (UID: \"6cb742a4-b88d-48fd-8f62-8e48250236d6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x" Feb 20 15:14:07.651413 master-0 kubenswrapper[28120]: I0220 15:14:07.651382 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:07.651570 master-0 kubenswrapper[28120]: I0220 15:14:07.651539 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:07.651715 master-0 kubenswrapper[28120]: I0220 15:14:07.651666 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5lxz\" (UniqueName: \"kubernetes.io/projected/48a4847f-0f2f-496b-b07e-4249ed44547c-kube-api-access-v5lxz\") pod \"watcher-operator-controller-manager-5db88f68c-nk84g\" (UID: \"48a4847f-0f2f-496b-b07e-4249ed44547c\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g" Feb 20 15:14:07.684565 master-0 kubenswrapper[28120]: I0220 15:14:07.684515 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x"] Feb 20 15:14:07.708783 master-0 kubenswrapper[28120]: I0220 15:14:07.708676 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj"] Feb 20 15:14:07.708783 master-0 kubenswrapper[28120]: I0220 15:14:07.708686 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5lxz\" (UniqueName: \"kubernetes.io/projected/48a4847f-0f2f-496b-b07e-4249ed44547c-kube-api-access-v5lxz\") pod \"watcher-operator-controller-manager-5db88f68c-nk84g\" (UID: \"48a4847f-0f2f-496b-b07e-4249ed44547c\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g" Feb 20 15:14:07.724727 master-0 kubenswrapper[28120]: I0220 15:14:07.724632 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7" Feb 20 15:14:07.736302 master-0 kubenswrapper[28120]: I0220 15:14:07.735952 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp" Feb 20 15:14:07.754690 master-0 kubenswrapper[28120]: I0220 15:14:07.754472 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rgkc\" (UniqueName: \"kubernetes.io/projected/6cb742a4-b88d-48fd-8f62-8e48250236d6-kube-api-access-5rgkc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-j9m5x\" (UID: \"6cb742a4-b88d-48fd-8f62-8e48250236d6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x" Feb 20 15:14:07.754690 master-0 kubenswrapper[28120]: I0220 15:14:07.754538 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:07.754690 master-0 kubenswrapper[28120]: I0220 15:14:07.754632 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:07.754946 master-0 kubenswrapper[28120]: E0220 15:14:07.754897 28120 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 20 15:14:07.755161 master-0 kubenswrapper[28120]: E0220 15:14:07.754973 28120 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 20 15:14:07.755161 master-0 kubenswrapper[28120]: E0220 15:14:07.754984 28120 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs podName:6dafbcda-4fec-47a9-b4d1-950010f65d3e nodeName:}" failed. No retries permitted until 2026-02-20 15:14:08.254962813 +0000 UTC m=+786.515756386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-gssk4" (UID: "6dafbcda-4fec-47a9-b4d1-950010f65d3e") : secret "webhook-server-cert" not found Feb 20 15:14:07.755161 master-0 kubenswrapper[28120]: I0220 15:14:07.755086 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v89hf\" (UniqueName: \"kubernetes.io/projected/6dafbcda-4fec-47a9-b4d1-950010f65d3e-kube-api-access-v89hf\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:07.755412 master-0 kubenswrapper[28120]: E0220 15:14:07.755166 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs podName:6dafbcda-4fec-47a9-b4d1-950010f65d3e nodeName:}" failed. No retries permitted until 2026-02-20 15:14:08.255149087 +0000 UTC m=+786.515942660 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-gssk4" (UID: "6dafbcda-4fec-47a9-b4d1-950010f65d3e") : secret "metrics-server-cert" not found Feb 20 15:14:07.778542 master-0 kubenswrapper[28120]: I0220 15:14:07.777197 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v89hf\" (UniqueName: \"kubernetes.io/projected/6dafbcda-4fec-47a9-b4d1-950010f65d3e-kube-api-access-v89hf\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:07.789713 master-0 kubenswrapper[28120]: I0220 15:14:07.788961 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rgkc\" (UniqueName: \"kubernetes.io/projected/6cb742a4-b88d-48fd-8f62-8e48250236d6-kube-api-access-5rgkc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-j9m5x\" (UID: \"6cb742a4-b88d-48fd-8f62-8e48250236d6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x" Feb 20 15:14:07.793196 master-0 kubenswrapper[28120]: I0220 15:14:07.792631 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g" Feb 20 15:14:07.944136 master-0 kubenswrapper[28120]: I0220 15:14:07.943772 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x" Feb 20 15:14:07.958196 master-0 kubenswrapper[28120]: I0220 15:14:07.958159 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n\" (UID: \"f4f68dcb-1812-4a20-944f-c5d9256a6f97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" Feb 20 15:14:07.958341 master-0 kubenswrapper[28120]: E0220 15:14:07.958312 28120 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 15:14:07.958384 master-0 kubenswrapper[28120]: E0220 15:14:07.958363 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert podName:f4f68dcb-1812-4a20-944f-c5d9256a6f97 nodeName:}" failed. No retries permitted until 2026-02-20 15:14:08.958346005 +0000 UTC m=+787.219139568 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" (UID: "f4f68dcb-1812-4a20-944f-c5d9256a6f97") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 15:14:08.090148 master-0 kubenswrapper[28120]: I0220 15:14:08.089066 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj"] Feb 20 15:14:08.095673 master-0 kubenswrapper[28120]: W0220 15:14:08.095598 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2788df58_11e5_421f_96c5_ddf698d5e2ee.slice/crio-12af83cc6e629a574ac16be390bbd970865776f01bd013cbbe9fc3a364567c00 WatchSource:0}: Error finding container 12af83cc6e629a574ac16be390bbd970865776f01bd013cbbe9fc3a364567c00: Status 404 returned error can't find the container with id 12af83cc6e629a574ac16be390bbd970865776f01bd013cbbe9fc3a364567c00 Feb 20 15:14:08.101625 master-0 kubenswrapper[28120]: W0220 15:14:08.101595 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6071656_f5a6_46df_81d3_71d286384ca7.slice/crio-5640608f53e4ace727a5b5af04bcda9ce39d0c4d94851c0e2ead8fd32fb8e50a WatchSource:0}: Error finding container 5640608f53e4ace727a5b5af04bcda9ce39d0c4d94851c0e2ead8fd32fb8e50a: Status 404 returned error can't find the container with id 5640608f53e4ace727a5b5af04bcda9ce39d0c4d94851c0e2ead8fd32fb8e50a Feb 20 15:14:08.116363 master-0 kubenswrapper[28120]: I0220 15:14:08.115466 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr"] Feb 20 15:14:08.123379 master-0 kubenswrapper[28120]: W0220 15:14:08.123321 28120 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35ec9b56_658e_4a51_9fef_5b628826b83d.slice/crio-a18ee19c3573d9e98e6f9a7c6f6cace7415beaf3775c4c895d3efe8d325f9c2e WatchSource:0}: Error finding container a18ee19c3573d9e98e6f9a7c6f6cace7415beaf3775c4c895d3efe8d325f9c2e: Status 404 returned error can't find the container with id a18ee19c3573d9e98e6f9a7c6f6cace7415beaf3775c4c895d3efe8d325f9c2e Feb 20 15:14:08.153166 master-0 kubenswrapper[28120]: I0220 15:14:08.153103 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt"] Feb 20 15:14:08.165162 master-0 kubenswrapper[28120]: I0220 15:14:08.165098 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp"] Feb 20 15:14:08.283257 master-0 kubenswrapper[28120]: I0220 15:14:08.283171 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:08.283477 master-0 kubenswrapper[28120]: E0220 15:14:08.283407 28120 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 20 15:14:08.283575 master-0 kubenswrapper[28120]: I0220 15:14:08.283463 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:08.283775 master-0 
kubenswrapper[28120]: E0220 15:14:08.283497 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs podName:6dafbcda-4fec-47a9-b4d1-950010f65d3e nodeName:}" failed. No retries permitted until 2026-02-20 15:14:09.283474083 +0000 UTC m=+787.544267646 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-gssk4" (UID: "6dafbcda-4fec-47a9-b4d1-950010f65d3e") : secret "metrics-server-cert" not found Feb 20 15:14:08.283775 master-0 kubenswrapper[28120]: E0220 15:14:08.283657 28120 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 20 15:14:08.283907 master-0 kubenswrapper[28120]: E0220 15:14:08.283797 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs podName:6dafbcda-4fec-47a9-b4d1-950010f65d3e nodeName:}" failed. No retries permitted until 2026-02-20 15:14:09.283783861 +0000 UTC m=+787.544577424 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-gssk4" (UID: "6dafbcda-4fec-47a9-b4d1-950010f65d3e") : secret "webhook-server-cert" not found Feb 20 15:14:08.369483 master-0 kubenswrapper[28120]: I0220 15:14:08.369414 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d"] Feb 20 15:14:08.433110 master-0 kubenswrapper[28120]: I0220 15:14:08.431677 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp"] Feb 20 15:14:08.437353 master-0 kubenswrapper[28120]: I0220 15:14:08.437304 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6"] Feb 20 15:14:08.590668 master-0 kubenswrapper[28120]: I0220 15:14:08.590557 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-9pk5l\" (UID: \"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l" Feb 20 15:14:08.591193 master-0 kubenswrapper[28120]: E0220 15:14:08.590796 28120 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 20 15:14:08.591193 master-0 kubenswrapper[28120]: E0220 15:14:08.590945 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert podName:86ae9025-ec1e-4e3a-b990-cb73f94ef8f6 nodeName:}" failed. No retries permitted until 2026-02-20 15:14:10.59089932 +0000 UTC m=+788.851692873 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert") pod "infra-operator-controller-manager-5f879c76b6-9pk5l" (UID: "86ae9025-ec1e-4e3a-b990-cb73f94ef8f6") : secret "infra-operator-webhook-server-cert" not found Feb 20 15:14:08.608415 master-0 kubenswrapper[28120]: I0220 15:14:08.608347 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d" event={"ID":"c4be1807-9b9c-4945-a039-2a0dbdf5bb60","Type":"ContainerStarted","Data":"59af7c1377130d081d67bff280f6308b3bda99373b70869b6e2b79568f1e206f"} Feb 20 15:14:08.610240 master-0 kubenswrapper[28120]: I0220 15:14:08.610203 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj" event={"ID":"2788df58-11e5-421f-96c5-ddf698d5e2ee","Type":"ContainerStarted","Data":"12af83cc6e629a574ac16be390bbd970865776f01bd013cbbe9fc3a364567c00"} Feb 20 15:14:08.613651 master-0 kubenswrapper[28120]: I0220 15:14:08.613580 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt" event={"ID":"a6071656-f5a6-46df-81d3-71d286384ca7","Type":"ContainerStarted","Data":"5640608f53e4ace727a5b5af04bcda9ce39d0c4d94851c0e2ead8fd32fb8e50a"} Feb 20 15:14:08.617272 master-0 kubenswrapper[28120]: I0220 15:14:08.614796 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp" event={"ID":"35ec9b56-658e-4a51-9fef-5b628826b83d","Type":"ContainerStarted","Data":"a18ee19c3573d9e98e6f9a7c6f6cace7415beaf3775c4c895d3efe8d325f9c2e"} Feb 20 15:14:08.617272 master-0 kubenswrapper[28120]: I0220 15:14:08.616771 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp" 
event={"ID":"406c9444-8bab-4866-8681-7cc8365af4ae","Type":"ContainerStarted","Data":"0504077ac8e2d78d2eaf02b89e3678fcf725df008f919c7dba7e784f94175de7"} Feb 20 15:14:08.618780 master-0 kubenswrapper[28120]: I0220 15:14:08.618656 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj" event={"ID":"571a5455-abe5-4154-a213-ac7d403f8bd2","Type":"ContainerStarted","Data":"968931f90c4227083c9a2de2a643641f7fe8cf2246f48f65db09a2761eaf7948"} Feb 20 15:14:08.625077 master-0 kubenswrapper[28120]: I0220 15:14:08.625023 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl" event={"ID":"5bea0f0b-5dca-43c9-a670-9dd77b112777","Type":"ContainerStarted","Data":"137f9952d843c3390fb748a76ef36c81155eeac18bfd3d0f789b148fd788f77c"} Feb 20 15:14:08.626498 master-0 kubenswrapper[28120]: I0220 15:14:08.626458 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6" event={"ID":"844f7139-c477-49ba-8219-80aa439f8ff0","Type":"ContainerStarted","Data":"3294cb1e339055ce2b0c504a0603a68af1b7da189d45b9f2cee7dde35af12fca"} Feb 20 15:14:08.634847 master-0 kubenswrapper[28120]: I0220 15:14:08.634727 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr" event={"ID":"1ea0ec03-f7e5-4698-ad90-a2977d8af231","Type":"ContainerStarted","Data":"250b6473387e651efb9fa0a322c00e415e819fea7563bfe3a51f924b9421aebf"} Feb 20 15:14:08.637250 master-0 kubenswrapper[28120]: I0220 15:14:08.637213 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x" event={"ID":"9b98b91c-3e81-430f-8f0e-8127407ec912","Type":"ContainerStarted","Data":"f434a30ace560a233619584be1340839d44397a207cc0db485f8366d60d2a21a"} Feb 20 15:14:08.639101 master-0 
kubenswrapper[28120]: I0220 15:14:08.639066 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7"] Feb 20 15:14:08.646191 master-0 kubenswrapper[28120]: W0220 15:14:08.646129 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7484e6b_acc7_4e73_bde4_a4a31808f501.slice/crio-edf3520c649109897c93a1e0ccc1b85bdd484dbf4fb655615a3b9f980b0902a2 WatchSource:0}: Error finding container edf3520c649109897c93a1e0ccc1b85bdd484dbf4fb655615a3b9f980b0902a2: Status 404 returned error can't find the container with id edf3520c649109897c93a1e0ccc1b85bdd484dbf4fb655615a3b9f980b0902a2 Feb 20 15:14:08.648760 master-0 kubenswrapper[28120]: I0220 15:14:08.648725 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk"] Feb 20 15:14:08.659406 master-0 kubenswrapper[28120]: W0220 15:14:08.659378 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26d662d7_afd6_428e_90e1_3c561b8cbaa9.slice/crio-40a8ae03b4665bddefdb82584c13f43434a1dd42790588b3216d73aff7ed6b21 WatchSource:0}: Error finding container 40a8ae03b4665bddefdb82584c13f43434a1dd42790588b3216d73aff7ed6b21: Status 404 returned error can't find the container with id 40a8ae03b4665bddefdb82584c13f43434a1dd42790588b3216d73aff7ed6b21 Feb 20 15:14:08.669158 master-0 kubenswrapper[28120]: I0220 15:14:08.669109 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc"] Feb 20 15:14:08.689066 master-0 kubenswrapper[28120]: W0220 15:14:08.689029 28120 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04007ddd_2b6c_4d07_b3c8_a783aa7de40f.slice/crio-baffa33738fdf6be6d22b40aa146fecc81ded586008d6e730191266b0193ed0e WatchSource:0}: Error finding container baffa33738fdf6be6d22b40aa146fecc81ded586008d6e730191266b0193ed0e: Status 404 returned error can't find the container with id baffa33738fdf6be6d22b40aa146fecc81ded586008d6e730191266b0193ed0e Feb 20 15:14:08.689473 master-0 kubenswrapper[28120]: I0220 15:14:08.689432 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m"] Feb 20 15:14:08.690745 master-0 kubenswrapper[28120]: W0220 15:14:08.690727 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod684221fe_1668_49a2_8bcd_1e23d8e6248f.slice/crio-e79bccabf2f33d1986a892a0f2f663e676ecb27467a6d782fd7c6c2d19c3a9af WatchSource:0}: Error finding container e79bccabf2f33d1986a892a0f2f663e676ecb27467a6d782fd7c6c2d19c3a9af: Status 404 returned error can't find the container with id e79bccabf2f33d1986a892a0f2f663e676ecb27467a6d782fd7c6c2d19c3a9af Feb 20 15:14:08.901299 master-0 kubenswrapper[28120]: W0220 15:14:08.901070 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09e800d6_539c_4388_9101_b9fdef4c0969.slice/crio-2b52d7510f0ea0a3b6dbc61f4ac0a01b8aa51729f5f58757952c7e966fbbcddf WatchSource:0}: Error finding container 2b52d7510f0ea0a3b6dbc61f4ac0a01b8aa51729f5f58757952c7e966fbbcddf: Status 404 returned error can't find the container with id 2b52d7510f0ea0a3b6dbc61f4ac0a01b8aa51729f5f58757952c7e966fbbcddf Feb 20 15:14:08.903324 master-0 kubenswrapper[28120]: I0220 15:14:08.903038 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-n865p"] Feb 20 15:14:08.918360 master-0 kubenswrapper[28120]: W0220 
15:14:08.912341 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48a4847f_0f2f_496b_b07e_4249ed44547c.slice/crio-5ce87ea3aabff553c3bc44106f882612aa433a7cabc0c132d03c6b843a2ceb43 WatchSource:0}: Error finding container 5ce87ea3aabff553c3bc44106f882612aa433a7cabc0c132d03c6b843a2ceb43: Status 404 returned error can't find the container with id 5ce87ea3aabff553c3bc44106f882612aa433a7cabc0c132d03c6b843a2ceb43 Feb 20 15:14:08.921382 master-0 kubenswrapper[28120]: I0220 15:14:08.919577 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g"] Feb 20 15:14:08.921382 master-0 kubenswrapper[28120]: E0220 15:14:08.919751 28120 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6r8bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-94rgn_openstack-operators(63ce0e64-a2f8-4f28-bfd6-aa8635c38a15): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 20 15:14:08.921604 master-0 kubenswrapper[28120]: E0220 15:14:08.921540 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn" podUID="63ce0e64-a2f8-4f28-bfd6-aa8635c38a15" Feb 20 15:14:08.929565 master-0 kubenswrapper[28120]: I0220 15:14:08.928979 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn"] Feb 20 15:14:08.937120 master-0 kubenswrapper[28120]: I0220 15:14:08.937054 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/test-operator-controller-manager-7866795846-7dxcp"] Feb 20 15:14:08.944900 master-0 kubenswrapper[28120]: I0220 15:14:08.944858 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8"] Feb 20 15:14:08.947780 master-0 kubenswrapper[28120]: W0220 15:14:08.947742 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cb742a4_b88d_48fd_8f62_8e48250236d6.slice/crio-5bb8a5c9aaf9e349dacb2f89ff10a8bf4a147a42e33e044c0926a257b891f871 WatchSource:0}: Error finding container 5bb8a5c9aaf9e349dacb2f89ff10a8bf4a147a42e33e044c0926a257b891f871: Status 404 returned error can't find the container with id 5bb8a5c9aaf9e349dacb2f89ff10a8bf4a147a42e33e044c0926a257b891f871 Feb 20 15:14:08.948913 master-0 kubenswrapper[28120]: W0220 15:14:08.948857 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96531071_f49e_44d4_98d3_3d4e3e24867a.slice/crio-98dfaa0741c060dda0f2932da34cc77d7af855ba94353c480fb546a05702f070 WatchSource:0}: Error finding container 98dfaa0741c060dda0f2932da34cc77d7af855ba94353c480fb546a05702f070: Status 404 returned error can't find the container with id 98dfaa0741c060dda0f2932da34cc77d7af855ba94353c480fb546a05702f070 Feb 20 15:14:08.949714 master-0 kubenswrapper[28120]: W0220 15:14:08.949674 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd054c93d_eb44_48ff_9005_a806434bf6a6.slice/crio-28712dcb248798fa21f9308109e8c449e880cc1d8ab630260fad26222de872fa WatchSource:0}: Error finding container 28712dcb248798fa21f9308109e8c449e880cc1d8ab630260fad26222de872fa: Status 404 returned error can't find the container with id 28712dcb248798fa21f9308109e8c449e880cc1d8ab630260fad26222de872fa Feb 20 15:14:08.952115 master-0 kubenswrapper[28120]: 
I0220 15:14:08.951801 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x"] Feb 20 15:14:08.952436 master-0 kubenswrapper[28120]: E0220 15:14:08.952270 28120 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6j77m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-p22m8_openstack-operators(d054c93d-eb44-48ff-9005-a806434bf6a6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 20 15:14:08.952436 master-0 kubenswrapper[28120]: E0220 15:14:08.952337 28120 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cst9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-7dxcp_openstack-operators(96531071-f49e-44d4-98d3-3d4e3e24867a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 20 15:14:08.952721 master-0 kubenswrapper[28120]: E0220 15:14:08.952663 28120 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rgkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000810000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-j9m5x_openstack-operators(6cb742a4-b88d-48fd-8f62-8e48250236d6): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 20 15:14:08.953763 master-0 kubenswrapper[28120]: E0220 15:14:08.953715 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8" podUID="d054c93d-eb44-48ff-9005-a806434bf6a6" Feb 20 15:14:08.953763 master-0 kubenswrapper[28120]: E0220 15:14:08.953736 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp" podUID="96531071-f49e-44d4-98d3-3d4e3e24867a" Feb 20 15:14:08.953843 master-0 kubenswrapper[28120]: E0220 15:14:08.953764 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x" podUID="6cb742a4-b88d-48fd-8f62-8e48250236d6" Feb 20 15:14:09.003143 master-0 kubenswrapper[28120]: I0220 15:14:09.002944 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n\" (UID: \"f4f68dcb-1812-4a20-944f-c5d9256a6f97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" Feb 20 15:14:09.003143 master-0 kubenswrapper[28120]: E0220 15:14:09.003132 28120 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 15:14:09.003349 master-0 kubenswrapper[28120]: E0220 15:14:09.003207 28120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert podName:f4f68dcb-1812-4a20-944f-c5d9256a6f97 nodeName:}" failed. No retries permitted until 2026-02-20 15:14:11.003186161 +0000 UTC m=+789.263979724 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" (UID: "f4f68dcb-1812-4a20-944f-c5d9256a6f97") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 15:14:09.306795 master-0 kubenswrapper[28120]: I0220 15:14:09.306728 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:09.307018 master-0 kubenswrapper[28120]: I0220 15:14:09.306951 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:09.307067 master-0 kubenswrapper[28120]: E0220 15:14:09.306946 28120 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 20 15:14:09.307097 master-0 kubenswrapper[28120]: E0220 15:14:09.306993 28120 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 20 15:14:09.307097 master-0 kubenswrapper[28120]: E0220 15:14:09.307086 28120 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs podName:6dafbcda-4fec-47a9-b4d1-950010f65d3e nodeName:}" failed. No retries permitted until 2026-02-20 15:14:11.30706685 +0000 UTC m=+789.567860413 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-gssk4" (UID: "6dafbcda-4fec-47a9-b4d1-950010f65d3e") : secret "webhook-server-cert" not found Feb 20 15:14:09.307159 master-0 kubenswrapper[28120]: E0220 15:14:09.307141 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs podName:6dafbcda-4fec-47a9-b4d1-950010f65d3e nodeName:}" failed. No retries permitted until 2026-02-20 15:14:11.307119541 +0000 UTC m=+789.567913104 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-gssk4" (UID: "6dafbcda-4fec-47a9-b4d1-950010f65d3e") : secret "metrics-server-cert" not found Feb 20 15:14:09.656008 master-0 kubenswrapper[28120]: I0220 15:14:09.655875 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x" event={"ID":"6cb742a4-b88d-48fd-8f62-8e48250236d6","Type":"ContainerStarted","Data":"5bb8a5c9aaf9e349dacb2f89ff10a8bf4a147a42e33e044c0926a257b891f871"} Feb 20 15:14:09.657311 master-0 kubenswrapper[28120]: I0220 15:14:09.657270 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m" event={"ID":"04007ddd-2b6c-4d07-b3c8-a783aa7de40f","Type":"ContainerStarted","Data":"baffa33738fdf6be6d22b40aa146fecc81ded586008d6e730191266b0193ed0e"} Feb 20 15:14:09.659507 
master-0 kubenswrapper[28120]: E0220 15:14:09.659481 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x" podUID="6cb742a4-b88d-48fd-8f62-8e48250236d6" Feb 20 15:14:09.659780 master-0 kubenswrapper[28120]: I0220 15:14:09.659747 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc" event={"ID":"684221fe-1668-49a2-8bcd-1e23d8e6248f","Type":"ContainerStarted","Data":"e79bccabf2f33d1986a892a0f2f663e676ecb27467a6d782fd7c6c2d19c3a9af"} Feb 20 15:14:09.663170 master-0 kubenswrapper[28120]: I0220 15:14:09.663137 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7" event={"ID":"26d662d7-afd6-428e-90e1-3c561b8cbaa9","Type":"ContainerStarted","Data":"40a8ae03b4665bddefdb82584c13f43434a1dd42790588b3216d73aff7ed6b21"} Feb 20 15:14:09.664866 master-0 kubenswrapper[28120]: I0220 15:14:09.664836 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8" event={"ID":"d054c93d-eb44-48ff-9005-a806434bf6a6","Type":"ContainerStarted","Data":"28712dcb248798fa21f9308109e8c449e880cc1d8ab630260fad26222de872fa"} Feb 20 15:14:09.667432 master-0 kubenswrapper[28120]: E0220 15:14:09.667390 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8" 
podUID="d054c93d-eb44-48ff-9005-a806434bf6a6" Feb 20 15:14:09.667868 master-0 kubenswrapper[28120]: I0220 15:14:09.667833 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g" event={"ID":"48a4847f-0f2f-496b-b07e-4249ed44547c","Type":"ContainerStarted","Data":"5ce87ea3aabff553c3bc44106f882612aa433a7cabc0c132d03c6b843a2ceb43"} Feb 20 15:14:09.669096 master-0 kubenswrapper[28120]: I0220 15:14:09.669066 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp" event={"ID":"96531071-f49e-44d4-98d3-3d4e3e24867a","Type":"ContainerStarted","Data":"98dfaa0741c060dda0f2932da34cc77d7af855ba94353c480fb546a05702f070"} Feb 20 15:14:09.670506 master-0 kubenswrapper[28120]: E0220 15:14:09.670478 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp" podUID="96531071-f49e-44d4-98d3-3d4e3e24867a" Feb 20 15:14:09.671657 master-0 kubenswrapper[28120]: I0220 15:14:09.671622 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk" event={"ID":"b7484e6b-acc7-4e73-bde4-a4a31808f501","Type":"ContainerStarted","Data":"edf3520c649109897c93a1e0ccc1b85bdd484dbf4fb655615a3b9f980b0902a2"} Feb 20 15:14:09.676322 master-0 kubenswrapper[28120]: I0220 15:14:09.673275 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n865p" event={"ID":"09e800d6-539c-4388-9101-b9fdef4c0969","Type":"ContainerStarted","Data":"2b52d7510f0ea0a3b6dbc61f4ac0a01b8aa51729f5f58757952c7e966fbbcddf"} Feb 20 15:14:09.676322 
master-0 kubenswrapper[28120]: I0220 15:14:09.674474 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn" event={"ID":"63ce0e64-a2f8-4f28-bfd6-aa8635c38a15","Type":"ContainerStarted","Data":"9355c58b9d54e8e74c66225d56f55a8e51a36cb5925665fb6cba9096ff08b6b9"} Feb 20 15:14:09.676830 master-0 kubenswrapper[28120]: E0220 15:14:09.676595 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn" podUID="63ce0e64-a2f8-4f28-bfd6-aa8635c38a15" Feb 20 15:14:10.794478 master-0 kubenswrapper[28120]: I0220 15:14:10.793852 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-9pk5l\" (UID: \"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l" Feb 20 15:14:10.794478 master-0 kubenswrapper[28120]: E0220 15:14:10.794251 28120 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 20 15:14:10.794478 master-0 kubenswrapper[28120]: E0220 15:14:10.794313 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert podName:86ae9025-ec1e-4e3a-b990-cb73f94ef8f6 nodeName:}" failed. No retries permitted until 2026-02-20 15:14:14.794291847 +0000 UTC m=+793.055085410 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert") pod "infra-operator-controller-manager-5f879c76b6-9pk5l" (UID: "86ae9025-ec1e-4e3a-b990-cb73f94ef8f6") : secret "infra-operator-webhook-server-cert" not found Feb 20 15:14:10.833243 master-0 kubenswrapper[28120]: E0220 15:14:10.833154 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x" podUID="6cb742a4-b88d-48fd-8f62-8e48250236d6" Feb 20 15:14:10.834733 master-0 kubenswrapper[28120]: E0220 15:14:10.834694 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn" podUID="63ce0e64-a2f8-4f28-bfd6-aa8635c38a15" Feb 20 15:14:10.836508 master-0 kubenswrapper[28120]: E0220 15:14:10.836450 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp" podUID="96531071-f49e-44d4-98d3-3d4e3e24867a" Feb 20 15:14:10.838275 master-0 kubenswrapper[28120]: E0220 15:14:10.836548 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8" podUID="d054c93d-eb44-48ff-9005-a806434bf6a6" Feb 20 15:14:11.101938 master-0 kubenswrapper[28120]: I0220 15:14:11.101597 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n\" (UID: \"f4f68dcb-1812-4a20-944f-c5d9256a6f97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" Feb 20 15:14:11.102325 master-0 kubenswrapper[28120]: E0220 15:14:11.102207 28120 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 15:14:11.102496 master-0 kubenswrapper[28120]: E0220 15:14:11.102477 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert podName:f4f68dcb-1812-4a20-944f-c5d9256a6f97 nodeName:}" failed. No retries permitted until 2026-02-20 15:14:15.102453322 +0000 UTC m=+793.363246885 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" (UID: "f4f68dcb-1812-4a20-944f-c5d9256a6f97") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 15:14:11.309875 master-0 kubenswrapper[28120]: I0220 15:14:11.309795 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:11.310116 master-0 kubenswrapper[28120]: E0220 15:14:11.310062 28120 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 20 15:14:11.310195 master-0 kubenswrapper[28120]: I0220 15:14:11.310102 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:11.310195 master-0 kubenswrapper[28120]: E0220 15:14:11.310185 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs podName:6dafbcda-4fec-47a9-b4d1-950010f65d3e nodeName:}" failed. No retries permitted until 2026-02-20 15:14:15.310156262 +0000 UTC m=+793.570950025 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-gssk4" (UID: "6dafbcda-4fec-47a9-b4d1-950010f65d3e") : secret "metrics-server-cert" not found Feb 20 15:14:11.310422 master-0 kubenswrapper[28120]: E0220 15:14:11.310372 28120 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 20 15:14:11.310565 master-0 kubenswrapper[28120]: E0220 15:14:11.310539 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs podName:6dafbcda-4fec-47a9-b4d1-950010f65d3e nodeName:}" failed. No retries permitted until 2026-02-20 15:14:15.310463569 +0000 UTC m=+793.571257132 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-gssk4" (UID: "6dafbcda-4fec-47a9-b4d1-950010f65d3e") : secret "webhook-server-cert" not found Feb 20 15:14:14.877845 master-0 kubenswrapper[28120]: I0220 15:14:14.877750 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-9pk5l\" (UID: \"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l" Feb 20 15:14:14.878970 master-0 kubenswrapper[28120]: E0220 15:14:14.877916 28120 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 20 15:14:14.880030 master-0 kubenswrapper[28120]: E0220 15:14:14.879882 28120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert podName:86ae9025-ec1e-4e3a-b990-cb73f94ef8f6 nodeName:}" failed. No retries permitted until 2026-02-20 15:14:22.878063908 +0000 UTC m=+801.138857491 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert") pod "infra-operator-controller-manager-5f879c76b6-9pk5l" (UID: "86ae9025-ec1e-4e3a-b990-cb73f94ef8f6") : secret "infra-operator-webhook-server-cert" not found Feb 20 15:14:15.185258 master-0 kubenswrapper[28120]: I0220 15:14:15.185130 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n\" (UID: \"f4f68dcb-1812-4a20-944f-c5d9256a6f97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" Feb 20 15:14:15.185538 master-0 kubenswrapper[28120]: E0220 15:14:15.185511 28120 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 15:14:15.185604 master-0 kubenswrapper[28120]: E0220 15:14:15.185578 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert podName:f4f68dcb-1812-4a20-944f-c5d9256a6f97 nodeName:}" failed. No retries permitted until 2026-02-20 15:14:23.185557026 +0000 UTC m=+801.446350589 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" (UID: "f4f68dcb-1812-4a20-944f-c5d9256a6f97") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 15:14:15.388590 master-0 kubenswrapper[28120]: I0220 15:14:15.388494 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:15.388899 master-0 kubenswrapper[28120]: E0220 15:14:15.388651 28120 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 20 15:14:15.388899 master-0 kubenswrapper[28120]: E0220 15:14:15.388714 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs podName:6dafbcda-4fec-47a9-b4d1-950010f65d3e nodeName:}" failed. No retries permitted until 2026-02-20 15:14:23.388695002 +0000 UTC m=+801.649488565 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-gssk4" (UID: "6dafbcda-4fec-47a9-b4d1-950010f65d3e") : secret "metrics-server-cert" not found Feb 20 15:14:15.388899 master-0 kubenswrapper[28120]: I0220 15:14:15.388792 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:15.389189 master-0 kubenswrapper[28120]: E0220 15:14:15.389047 28120 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 20 15:14:15.389189 master-0 kubenswrapper[28120]: E0220 15:14:15.389183 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs podName:6dafbcda-4fec-47a9-b4d1-950010f65d3e nodeName:}" failed. No retries permitted until 2026-02-20 15:14:23.389151974 +0000 UTC m=+801.649945577 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-gssk4" (UID: "6dafbcda-4fec-47a9-b4d1-950010f65d3e") : secret "webhook-server-cert" not found Feb 20 15:14:22.955718 master-0 kubenswrapper[28120]: I0220 15:14:22.954915 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-9pk5l\" (UID: \"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l" Feb 20 15:14:22.955718 master-0 kubenswrapper[28120]: E0220 15:14:22.955109 28120 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 20 15:14:22.955718 master-0 kubenswrapper[28120]: E0220 15:14:22.955151 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert podName:86ae9025-ec1e-4e3a-b990-cb73f94ef8f6 nodeName:}" failed. No retries permitted until 2026-02-20 15:14:38.955134214 +0000 UTC m=+817.215927777 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert") pod "infra-operator-controller-manager-5f879c76b6-9pk5l" (UID: "86ae9025-ec1e-4e3a-b990-cb73f94ef8f6") : secret "infra-operator-webhook-server-cert" not found Feb 20 15:14:22.986239 master-0 kubenswrapper[28120]: I0220 15:14:22.983701 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl" event={"ID":"5bea0f0b-5dca-43c9-a670-9dd77b112777","Type":"ContainerStarted","Data":"88896499385b1ce8895528b3714f914d027e03dbb4f1e6e8247036cc652f5fde"} Feb 20 15:14:22.986239 master-0 kubenswrapper[28120]: I0220 15:14:22.984999 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl" Feb 20 15:14:22.995378 master-0 kubenswrapper[28120]: I0220 15:14:22.995329 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d" event={"ID":"c4be1807-9b9c-4945-a039-2a0dbdf5bb60","Type":"ContainerStarted","Data":"bbe0ed2cb8b1f19e3c9b2d335cb3eec862c614e36bd7e579d47cf38d26abc1d8"} Feb 20 15:14:22.995754 master-0 kubenswrapper[28120]: I0220 15:14:22.995500 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d" Feb 20 15:14:22.997908 master-0 kubenswrapper[28120]: I0220 15:14:22.997882 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n865p" event={"ID":"09e800d6-539c-4388-9101-b9fdef4c0969","Type":"ContainerStarted","Data":"82af1af5525fb4d16fcb70b6e09b1925ec24e9263c3293444423a3e727fc0cb7"} Feb 20 15:14:22.998186 master-0 kubenswrapper[28120]: I0220 15:14:22.998142 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-n865p" Feb 20 15:14:23.004762 master-0 kubenswrapper[28120]: I0220 15:14:23.004132 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt" event={"ID":"a6071656-f5a6-46df-81d3-71d286384ca7","Type":"ContainerStarted","Data":"f7079b98e226af0cb3f9fb93e41ebee9ec4210cb5c8160f3d320adc029057289"} Feb 20 15:14:23.004762 master-0 kubenswrapper[28120]: I0220 15:14:23.004224 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt" Feb 20 15:14:23.009733 master-0 kubenswrapper[28120]: I0220 15:14:23.007799 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp" event={"ID":"35ec9b56-658e-4a51-9fef-5b628826b83d","Type":"ContainerStarted","Data":"56b351beeb6a48a5087bea99f2850e6b730aeeacd7a56a3b006e62cee76823a6"} Feb 20 15:14:23.009733 master-0 kubenswrapper[28120]: I0220 15:14:23.008095 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp" Feb 20 15:14:23.012129 master-0 kubenswrapper[28120]: I0220 15:14:23.010686 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp" event={"ID":"406c9444-8bab-4866-8681-7cc8365af4ae","Type":"ContainerStarted","Data":"4c28491acf1df5ffda9e4a244297e8663ea72d9e4fcfab3579352e42b8f450c4"} Feb 20 15:14:23.012129 master-0 kubenswrapper[28120]: I0220 15:14:23.010812 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp" Feb 20 15:14:23.021689 master-0 kubenswrapper[28120]: I0220 15:14:23.021644 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj" event={"ID":"571a5455-abe5-4154-a213-ac7d403f8bd2","Type":"ContainerStarted","Data":"eb0973c160674557680a7556378002fa62ad270e841dd5e0c3c863d81a57f295"} Feb 20 15:14:23.022443 master-0 kubenswrapper[28120]: I0220 15:14:23.022423 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj" Feb 20 15:14:23.031940 master-0 kubenswrapper[28120]: I0220 15:14:23.026178 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl" podStartSLOduration=3.013687704 podStartE2EDuration="17.026156625s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:07.641431572 +0000 UTC m=+785.902225135" lastFinishedPulling="2026-02-20 15:14:21.653900473 +0000 UTC m=+799.914694056" observedRunningTime="2026-02-20 15:14:23.014515545 +0000 UTC m=+801.275309108" watchObservedRunningTime="2026-02-20 15:14:23.026156625 +0000 UTC m=+801.286950198" Feb 20 15:14:23.076966 master-0 kubenswrapper[28120]: I0220 15:14:23.076492 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d" podStartSLOduration=3.228122621 podStartE2EDuration="17.076467499s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.37035385 +0000 UTC m=+786.631147423" lastFinishedPulling="2026-02-20 15:14:22.218698708 +0000 UTC m=+800.479492301" observedRunningTime="2026-02-20 15:14:23.04199885 +0000 UTC m=+801.302792413" watchObservedRunningTime="2026-02-20 15:14:23.076467499 +0000 UTC m=+801.337261072" Feb 20 15:14:23.099013 master-0 kubenswrapper[28120]: I0220 15:14:23.097982 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-n865p" podStartSLOduration=3.712236473 podStartE2EDuration="17.097958335s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.903021493 +0000 UTC m=+787.163815056" lastFinishedPulling="2026-02-20 15:14:22.288743335 +0000 UTC m=+800.549536918" observedRunningTime="2026-02-20 15:14:23.064688206 +0000 UTC m=+801.325481769" watchObservedRunningTime="2026-02-20 15:14:23.097958335 +0000 UTC m=+801.358751898" Feb 20 15:14:23.105393 master-0 kubenswrapper[28120]: I0220 15:14:23.103740 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt" podStartSLOduration=2.995025838 podStartE2EDuration="17.103725729s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.110006787 +0000 UTC m=+786.370800350" lastFinishedPulling="2026-02-20 15:14:22.218706678 +0000 UTC m=+800.479500241" observedRunningTime="2026-02-20 15:14:23.098458378 +0000 UTC m=+801.359251941" watchObservedRunningTime="2026-02-20 15:14:23.103725729 +0000 UTC m=+801.364519292" Feb 20 15:14:23.131397 master-0 kubenswrapper[28120]: I0220 15:14:23.130874 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp" podStartSLOduration=3.048252345 podStartE2EDuration="17.130858356s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.136148029 +0000 UTC m=+786.396941602" lastFinishedPulling="2026-02-20 15:14:22.21875406 +0000 UTC m=+800.479547613" observedRunningTime="2026-02-20 15:14:23.12259312 +0000 UTC m=+801.383386683" watchObservedRunningTime="2026-02-20 15:14:23.130858356 +0000 UTC m=+801.391651919" Feb 20 15:14:23.166946 master-0 kubenswrapper[28120]: I0220 15:14:23.165901 28120 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp" podStartSLOduration=3.355141658 podStartE2EDuration="17.165871709s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.407958467 +0000 UTC m=+786.668752030" lastFinishedPulling="2026-02-20 15:14:22.218688488 +0000 UTC m=+800.479482081" observedRunningTime="2026-02-20 15:14:23.157707715 +0000 UTC m=+801.418501278" watchObservedRunningTime="2026-02-20 15:14:23.165871709 +0000 UTC m=+801.426665292" Feb 20 15:14:23.193035 master-0 kubenswrapper[28120]: I0220 15:14:23.192332 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj" podStartSLOduration=2.693664141 podStartE2EDuration="17.192315138s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:07.781773321 +0000 UTC m=+786.042566884" lastFinishedPulling="2026-02-20 15:14:22.280424318 +0000 UTC m=+800.541217881" observedRunningTime="2026-02-20 15:14:23.189267732 +0000 UTC m=+801.450061305" watchObservedRunningTime="2026-02-20 15:14:23.192315138 +0000 UTC m=+801.453108701" Feb 20 15:14:23.261890 master-0 kubenswrapper[28120]: I0220 15:14:23.261285 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n\" (UID: \"f4f68dcb-1812-4a20-944f-c5d9256a6f97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" Feb 20 15:14:23.261890 master-0 kubenswrapper[28120]: E0220 15:14:23.261508 28120 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 15:14:23.261890 master-0 kubenswrapper[28120]: E0220 15:14:23.261555 28120 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert podName:f4f68dcb-1812-4a20-944f-c5d9256a6f97 nodeName:}" failed. No retries permitted until 2026-02-20 15:14:39.261538535 +0000 UTC m=+817.522332098 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" (UID: "f4f68dcb-1812-4a20-944f-c5d9256a6f97") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 20 15:14:23.466557 master-0 kubenswrapper[28120]: I0220 15:14:23.466428 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:23.466557 master-0 kubenswrapper[28120]: I0220 15:14:23.466512 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:23.467209 master-0 kubenswrapper[28120]: E0220 15:14:23.467110 28120 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 20 15:14:23.467308 master-0 kubenswrapper[28120]: E0220 15:14:23.467214 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs podName:6dafbcda-4fec-47a9-b4d1-950010f65d3e nodeName:}" 
failed. No retries permitted until 2026-02-20 15:14:39.467185253 +0000 UTC m=+817.727978816 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-gssk4" (UID: "6dafbcda-4fec-47a9-b4d1-950010f65d3e") : secret "metrics-server-cert" not found Feb 20 15:14:23.470966 master-0 kubenswrapper[28120]: I0220 15:14:23.470632 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" Feb 20 15:14:24.052897 master-0 kubenswrapper[28120]: I0220 15:14:24.052843 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6" event={"ID":"844f7139-c477-49ba-8219-80aa439f8ff0","Type":"ContainerStarted","Data":"5e8fe5a70f3099f9d13bda8b683a0260daad70a20c5e5d4ecf63b55e162cb0e2"} Feb 20 15:14:24.053835 master-0 kubenswrapper[28120]: I0220 15:14:24.053805 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6" Feb 20 15:14:24.055535 master-0 kubenswrapper[28120]: I0220 15:14:24.055350 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr" event={"ID":"1ea0ec03-f7e5-4698-ad90-a2977d8af231","Type":"ContainerStarted","Data":"fc6e73561d9ec594a4c089b80aea3b38c1c59142518773cb500960e71dfa0139"} Feb 20 15:14:24.055915 master-0 kubenswrapper[28120]: I0220 15:14:24.055892 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr"
Feb 20 15:14:24.090834 master-0 kubenswrapper[28120]: I0220 15:14:24.090791 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk"
Feb 20 15:14:24.090834 master-0 kubenswrapper[28120]: I0220 15:14:24.090829 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x"
Feb 20 15:14:24.091070 master-0 kubenswrapper[28120]: I0220 15:14:24.090839 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk" event={"ID":"b7484e6b-acc7-4e73-bde4-a4a31808f501","Type":"ContainerStarted","Data":"090e0098b88157d1bb2988684b06d05503f5fdb7de8f23021a6edab6ae4148e4"}
Feb 20 15:14:24.091070 master-0 kubenswrapper[28120]: I0220 15:14:24.090855 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc"
Feb 20 15:14:24.091070 master-0 kubenswrapper[28120]: I0220 15:14:24.090866 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x" event={"ID":"9b98b91c-3e81-430f-8f0e-8127407ec912","Type":"ContainerStarted","Data":"e067034ae905dcd4ee8345baec41cb007f79c86a27afd01b59365b4f040a26fc"}
Feb 20 15:14:24.091070 master-0 kubenswrapper[28120]: I0220 15:14:24.090875 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc" event={"ID":"684221fe-1668-49a2-8bcd-1e23d8e6248f","Type":"ContainerStarted","Data":"737292a9ae2b4e58c9910fbf2bc12ec37178442f2e7f1dcac8877f735ac72f5c"}
Feb 20 15:14:24.096005 master-0 kubenswrapper[28120]: I0220 15:14:24.094394 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj" event={"ID":"2788df58-11e5-421f-96c5-ddf698d5e2ee","Type":"ContainerStarted","Data":"10d8183dcb1b897ca0b2a757d62896023c0e44be92526b79741065d6660a2ed0"}
Feb 20 15:14:24.096005 master-0 kubenswrapper[28120]: I0220 15:14:24.094770 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj"
Feb 20 15:14:24.097608 master-0 kubenswrapper[28120]: I0220 15:14:24.096852 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7" event={"ID":"26d662d7-afd6-428e-90e1-3c561b8cbaa9","Type":"ContainerStarted","Data":"7f77cb2503a8b657b8db05859523cdecdbc831b53bbcaa63dc67cfdd4e55142b"}
Feb 20 15:14:24.097608 master-0 kubenswrapper[28120]: I0220 15:14:24.097013 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7"
Feb 20 15:14:24.098351 master-0 kubenswrapper[28120]: I0220 15:14:24.098313 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g" event={"ID":"48a4847f-0f2f-496b-b07e-4249ed44547c","Type":"ContainerStarted","Data":"0a9b5879f208340ffb5a367fcf2cbaef9e6e84338183596442f881ffad155a44"}
Feb 20 15:14:24.098418 master-0 kubenswrapper[28120]: I0220 15:14:24.098397 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g"
Feb 20 15:14:24.104590 master-0 kubenswrapper[28120]: I0220 15:14:24.104510 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6" podStartSLOduration=4.205053814 podStartE2EDuration="18.104492316s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.424849259 +0000 UTC m=+786.685642822" lastFinishedPulling="2026-02-20 15:14:22.324287771 +0000 UTC m=+800.585081324" observedRunningTime="2026-02-20 15:14:24.078353604 +0000 UTC m=+802.339147167" watchObservedRunningTime="2026-02-20 15:14:24.104492316 +0000 UTC m=+802.365285879"
Feb 20 15:14:24.106544 master-0 kubenswrapper[28120]: I0220 15:14:24.106301 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m" event={"ID":"04007ddd-2b6c-4d07-b3c8-a783aa7de40f","Type":"ContainerStarted","Data":"e4bd46ee6e161d90fe48a55192db715623756540c9516573310d1895ab4ed1e4"}
Feb 20 15:14:24.125069 master-0 kubenswrapper[28120]: I0220 15:14:24.124980 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr" podStartSLOduration=4.009372093 podStartE2EDuration="18.124957326s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.102596202 +0000 UTC m=+786.363389765" lastFinishedPulling="2026-02-20 15:14:22.218181435 +0000 UTC m=+800.478974998" observedRunningTime="2026-02-20 15:14:24.103317526 +0000 UTC m=+802.364111089" watchObservedRunningTime="2026-02-20 15:14:24.124957326 +0000 UTC m=+802.385750889"
Feb 20 15:14:24.126772 master-0 kubenswrapper[28120]: I0220 15:14:24.126742 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk" podStartSLOduration=4.4686982969999995 podStartE2EDuration="18.12673682s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.650340132 +0000 UTC m=+786.911133695" lastFinishedPulling="2026-02-20 15:14:22.308378605 +0000 UTC m=+800.569172218" observedRunningTime="2026-02-20 15:14:24.120012993 +0000 UTC m=+802.380806556" watchObservedRunningTime="2026-02-20 15:14:24.12673682 +0000 UTC m=+802.387530383"
Feb 20 15:14:24.168294 master-0 kubenswrapper[28120]: I0220 15:14:24.167721 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x" podStartSLOduration=3.692526372 podStartE2EDuration="18.167703912s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:07.743528188 +0000 UTC m=+786.004321751" lastFinishedPulling="2026-02-20 15:14:22.218705728 +0000 UTC m=+800.479499291" observedRunningTime="2026-02-20 15:14:24.147098608 +0000 UTC m=+802.407892191" watchObservedRunningTime="2026-02-20 15:14:24.167703912 +0000 UTC m=+802.428497465"
Feb 20 15:14:24.185251 master-0 kubenswrapper[28120]: I0220 15:14:24.185160 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g" podStartSLOduration=3.8149661630000002 podStartE2EDuration="17.185141807s" podCreationTimestamp="2026-02-20 15:14:07 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.918571321 +0000 UTC m=+787.179364884" lastFinishedPulling="2026-02-20 15:14:22.288746965 +0000 UTC m=+800.549540528" observedRunningTime="2026-02-20 15:14:24.169096737 +0000 UTC m=+802.429890300" watchObservedRunningTime="2026-02-20 15:14:24.185141807 +0000 UTC m=+802.445935370"
Feb 20 15:14:24.196108 master-0 kubenswrapper[28120]: I0220 15:14:24.196034 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m" podStartSLOduration=4.64926759 podStartE2EDuration="18.196020588s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.691060187 +0000 UTC m=+786.951853740" lastFinishedPulling="2026-02-20 15:14:22.237813175 +0000 UTC m=+800.498606738" observedRunningTime="2026-02-20 15:14:24.195149866 +0000 UTC m=+802.455943429" watchObservedRunningTime="2026-02-20 15:14:24.196020588 +0000 UTC m=+802.456814161"
Feb 20 15:14:24.220259 master-0 kubenswrapper[28120]: I0220 15:14:24.220174 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj" podStartSLOduration=4.096955057 podStartE2EDuration="18.2201589s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.09767876 +0000 UTC m=+786.358472323" lastFinishedPulling="2026-02-20 15:14:22.220882603 +0000 UTC m=+800.481676166" observedRunningTime="2026-02-20 15:14:24.213076043 +0000 UTC m=+802.473869606" watchObservedRunningTime="2026-02-20 15:14:24.2201589 +0000 UTC m=+802.480952463"
Feb 20 15:14:24.276269 master-0 kubenswrapper[28120]: I0220 15:14:24.276199 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc" podStartSLOduration=4.683412332 podStartE2EDuration="18.276182417s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.694420361 +0000 UTC m=+786.955213914" lastFinishedPulling="2026-02-20 15:14:22.287190436 +0000 UTC m=+800.547983999" observedRunningTime="2026-02-20 15:14:24.238472697 +0000 UTC m=+802.499266260" watchObservedRunningTime="2026-02-20 15:14:24.276182417 +0000 UTC m=+802.536975970"
Feb 20 15:14:25.118134 master-0 kubenswrapper[28120]: I0220 15:14:25.118091 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m"
Feb 20 15:14:26.143021 master-0 kubenswrapper[28120]: I0220 15:14:26.142584 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8" event={"ID":"d054c93d-eb44-48ff-9005-a806434bf6a6","Type":"ContainerStarted","Data":"2b6e42dcf755275fa9930c88d79a213790fec4e15a8805e6ba19f4af1ee4304f"}
Feb 20 15:14:26.145218 master-0 kubenswrapper[28120]: I0220 15:14:26.144135 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8"
Feb 20 15:14:26.174630 master-0 kubenswrapper[28120]: I0220 15:14:26.174525 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8" podStartSLOduration=3.7197583720000003 podStartE2EDuration="20.174498158s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.952156199 +0000 UTC m=+787.212949762" lastFinishedPulling="2026-02-20 15:14:25.406895985 +0000 UTC m=+803.667689548" observedRunningTime="2026-02-20 15:14:26.171049492 +0000 UTC m=+804.431843085" watchObservedRunningTime="2026-02-20 15:14:26.174498158 +0000 UTC m=+804.435291751"
Feb 20 15:14:26.188263 master-0 kubenswrapper[28120]: I0220 15:14:26.188174 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7" podStartSLOduration=6.564883622 podStartE2EDuration="20.188148118s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.662896295 +0000 UTC m=+786.923689858" lastFinishedPulling="2026-02-20 15:14:22.286160791 +0000 UTC m=+800.546954354" observedRunningTime="2026-02-20 15:14:24.27109956 +0000 UTC m=+802.531893113" watchObservedRunningTime="2026-02-20 15:14:26.188148118 +0000 UTC m=+804.448941711"
Feb 20 15:14:27.122351 master-0 kubenswrapper[28120]: I0220 15:14:27.122277 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sbrtr"
Feb 20 15:14:27.135828 master-0 kubenswrapper[28120]: I0220 15:14:27.135768 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jspvt"
Feb 20 15:14:27.146742 master-0 kubenswrapper[28120]: I0220 15:14:27.146682 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-kqsmp"
Feb 20 15:14:27.264397 master-0 kubenswrapper[28120]: I0220 15:14:27.264359 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-qkpbj"
Feb 20 15:14:27.378020 master-0 kubenswrapper[28120]: I0220 15:14:27.372613 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-hp67m"
Feb 20 15:14:27.411262 master-0 kubenswrapper[28120]: I0220 15:14:27.411162 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-slfqp"
Feb 20 15:14:27.411977 master-0 kubenswrapper[28120]: I0220 15:14:27.411800 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-nvh8d"
Feb 20 15:14:27.643524 master-0 kubenswrapper[28120]: I0220 15:14:27.643386 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n865p"
Feb 20 15:14:29.191794 master-0 kubenswrapper[28120]: I0220 15:14:29.191744 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x" event={"ID":"6cb742a4-b88d-48fd-8f62-8e48250236d6","Type":"ContainerStarted","Data":"0e1affd26a5af855f942facdf39dd401b3a66f0fa51da7955f539e5339d96a8b"}
Feb 20 15:14:29.196498 master-0 kubenswrapper[28120]: I0220 15:14:29.196465 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp" event={"ID":"96531071-f49e-44d4-98d3-3d4e3e24867a","Type":"ContainerStarted","Data":"d7486e42b265ce5e2d49c16f7ca7eb9290569bb1a7eaee515d25d7405baef39e"}
Feb 20 15:14:29.196658 master-0 kubenswrapper[28120]: I0220 15:14:29.196630 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp"
Feb 20 15:14:29.213765 master-0 kubenswrapper[28120]: I0220 15:14:29.213681 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-j9m5x" podStartSLOduration=2.819347315 podStartE2EDuration="22.213656738s" podCreationTimestamp="2026-02-20 15:14:07 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.952528658 +0000 UTC m=+787.213322231" lastFinishedPulling="2026-02-20 15:14:28.346838091 +0000 UTC m=+806.607631654" observedRunningTime="2026-02-20 15:14:29.211166896 +0000 UTC m=+807.471960499" watchObservedRunningTime="2026-02-20 15:14:29.213656738 +0000 UTC m=+807.474450321"
Feb 20 15:14:29.251810 master-0 kubenswrapper[28120]: I0220 15:14:29.251733 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp" podStartSLOduration=3.876998073 podStartE2EDuration="23.251116462s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.95219858 +0000 UTC m=+787.212992133" lastFinishedPulling="2026-02-20 15:14:28.326316959 +0000 UTC m=+806.587110522" observedRunningTime="2026-02-20 15:14:29.248081166 +0000 UTC m=+807.508874759" watchObservedRunningTime="2026-02-20 15:14:29.251116462 +0000 UTC m=+807.511910025"
Feb 20 15:14:30.208201 master-0 kubenswrapper[28120]: I0220 15:14:30.208157 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn" event={"ID":"63ce0e64-a2f8-4f28-bfd6-aa8635c38a15","Type":"ContainerStarted","Data":"d65d22ba809b022792626c0f2f31401beecbea4754537467a202e6079f12a2dd"}
Feb 20 15:14:30.209114 master-0 kubenswrapper[28120]: I0220 15:14:30.209030 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn"
Feb 20 15:14:30.244710 master-0 kubenswrapper[28120]: I0220 15:14:30.244586 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn" podStartSLOduration=3.75416322 podStartE2EDuration="24.244565237s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:08.919563826 +0000 UTC m=+787.180357389" lastFinishedPulling="2026-02-20 15:14:29.409965843 +0000 UTC m=+807.670759406" observedRunningTime="2026-02-20 15:14:30.242344462 +0000 UTC m=+808.503138035" watchObservedRunningTime="2026-02-20 15:14:30.244565237 +0000 UTC m=+808.505358810"
Feb 20 15:14:36.917199 master-0 kubenswrapper[28120]: I0220 15:14:36.917108 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tv9jl"
Feb 20 15:14:36.954069 master-0 kubenswrapper[28120]: I0220 15:14:36.947760 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kmd7x"
Feb 20 15:14:36.988983 master-0 kubenswrapper[28120]: I0220 15:14:36.988069 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cbwxj"
Feb 20 15:14:37.355387 master-0 kubenswrapper[28120]: I0220 15:14:37.355288 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-kkrd6"
Feb 20 15:14:37.409358 master-0 kubenswrapper[28120]: I0220 15:14:37.408655 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-g75bk"
Feb 20 15:14:37.448557 master-0 kubenswrapper[28120]: I0220 15:14:37.448488 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pz7sc"
Feb 20 15:14:37.500956 master-0 kubenswrapper[28120]: I0220 15:14:37.500750 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-p22m8"
Feb 20 15:14:37.599703 master-0 kubenswrapper[28120]: I0220 15:14:37.599643 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-94rgn"
Feb 20 15:14:37.727404 master-0 kubenswrapper[28120]: I0220 15:14:37.727304 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cdwb7"
Feb 20 15:14:37.739794 master-0 kubenswrapper[28120]: I0220 15:14:37.739749 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-7dxcp"
Feb 20 15:14:37.794967 master-0 kubenswrapper[28120]: I0220 15:14:37.794887 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-nk84g"
Feb 20 15:14:38.995858 master-0 kubenswrapper[28120]: I0220 15:14:38.995767 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-9pk5l\" (UID: \"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"
Feb 20 15:14:39.001520 master-0 kubenswrapper[28120]: I0220 15:14:39.001439 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86ae9025-ec1e-4e3a-b990-cb73f94ef8f6-cert\") pod \"infra-operator-controller-manager-5f879c76b6-9pk5l\" (UID: \"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"
Feb 20 15:14:39.293833 master-0 kubenswrapper[28120]: I0220 15:14:39.293647 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"
Feb 20 15:14:39.301530 master-0 kubenswrapper[28120]: I0220 15:14:39.301475 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n\" (UID: \"f4f68dcb-1812-4a20-944f-c5d9256a6f97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n"
Feb 20 15:14:39.308951 master-0 kubenswrapper[28120]: I0220 15:14:39.308091 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4f68dcb-1812-4a20-944f-c5d9256a6f97-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n\" (UID: \"f4f68dcb-1812-4a20-944f-c5d9256a6f97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n"
Feb 20 15:14:39.511193 master-0 kubenswrapper[28120]: I0220 15:14:39.511059 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4"
Feb 20 15:14:39.518033 master-0 kubenswrapper[28120]: I0220 15:14:39.517983 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6dafbcda-4fec-47a9-b4d1-950010f65d3e-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-gssk4\" (UID: \"6dafbcda-4fec-47a9-b4d1-950010f65d3e\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4"
Feb 20 15:14:39.565252 master-0 kubenswrapper[28120]: I0220 15:14:39.565081 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n"
Feb 20 15:14:39.709656 master-0 kubenswrapper[28120]: I0220 15:14:39.709581 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4"
Feb 20 15:14:39.816816 master-0 kubenswrapper[28120]: I0220 15:14:39.816609 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"]
Feb 20 15:14:40.156084 master-0 kubenswrapper[28120]: I0220 15:14:40.155952 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n"]
Feb 20 15:14:40.190147 master-0 kubenswrapper[28120]: I0220 15:14:40.190038 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4"]
Feb 20 15:14:40.193375 master-0 kubenswrapper[28120]: W0220 15:14:40.193232 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dafbcda_4fec_47a9_b4d1_950010f65d3e.slice/crio-750f0ffc3b69a6ee909ceafe82576f6f7a544d53f2a115542214125e08da1be4 WatchSource:0}: Error finding container 750f0ffc3b69a6ee909ceafe82576f6f7a544d53f2a115542214125e08da1be4: Status 404 returned error can't find the container with id 750f0ffc3b69a6ee909ceafe82576f6f7a544d53f2a115542214125e08da1be4
Feb 20 15:14:40.331294 master-0 kubenswrapper[28120]: I0220 15:14:40.330668 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" event={"ID":"f4f68dcb-1812-4a20-944f-c5d9256a6f97","Type":"ContainerStarted","Data":"83c08e97182f0f1eb43afda2880a6e715061ed79b47443ed4b9e767f2a9ed3dc"}
Feb 20 15:14:40.333102 master-0 kubenswrapper[28120]: I0220 15:14:40.332821 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" event={"ID":"6dafbcda-4fec-47a9-b4d1-950010f65d3e","Type":"ContainerStarted","Data":"750f0ffc3b69a6ee909ceafe82576f6f7a544d53f2a115542214125e08da1be4"}
Feb 20 15:14:40.334654 master-0 kubenswrapper[28120]: I0220 15:14:40.334562 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l" event={"ID":"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6","Type":"ContainerStarted","Data":"8ef1bdbb646d8078357ee345783e904fe87dfb04365d5179bb680dd8fd65203d"}
Feb 20 15:14:41.347227 master-0 kubenswrapper[28120]: I0220 15:14:41.347134 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" event={"ID":"6dafbcda-4fec-47a9-b4d1-950010f65d3e","Type":"ContainerStarted","Data":"f621ef0349d06f704719ca58d35e22301808bbd61d3359c8822ffaee8d387ded"}
Feb 20 15:14:41.348083 master-0 kubenswrapper[28120]: I0220 15:14:41.347311 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4"
Feb 20 15:14:41.392958 master-0 kubenswrapper[28120]: I0220 15:14:41.392842 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4" podStartSLOduration=34.392822232 podStartE2EDuration="34.392822232s" podCreationTimestamp="2026-02-20 15:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:14:41.380908825 +0000 UTC m=+819.641702448" watchObservedRunningTime="2026-02-20 15:14:41.392822232 +0000 UTC m=+819.653615785"
Feb 20 15:14:43.376064 master-0 kubenswrapper[28120]: I0220 15:14:43.375997 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l" event={"ID":"86ae9025-ec1e-4e3a-b990-cb73f94ef8f6","Type":"ContainerStarted","Data":"149cfe7e74c34b33d8067f07a3bd89ca0bf91b104b0c015896d46e40d6496543"}
Feb 20 15:14:43.376915 master-0 kubenswrapper[28120]: I0220 15:14:43.376234 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"
Feb 20 15:14:43.378096 master-0 kubenswrapper[28120]: I0220 15:14:43.378036 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" event={"ID":"f4f68dcb-1812-4a20-944f-c5d9256a6f97","Type":"ContainerStarted","Data":"6c646dce8b477c222da4b2b2471cd75aaa63253dc6f2894e2cc217117f41c6c6"}
Feb 20 15:14:43.379181 master-0 kubenswrapper[28120]: I0220 15:14:43.379108 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n"
Feb 20 15:14:43.406683 master-0 kubenswrapper[28120]: I0220 15:14:43.403297 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l" podStartSLOduration=34.726623139 podStartE2EDuration="37.403273968s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:39.849853953 +0000 UTC m=+818.110647536" lastFinishedPulling="2026-02-20 15:14:42.526504802 +0000 UTC m=+820.787298365" observedRunningTime="2026-02-20 15:14:43.399815641 +0000 UTC m=+821.660609254" watchObservedRunningTime="2026-02-20 15:14:43.403273968 +0000 UTC m=+821.664067541"
Feb 20 15:14:43.459480 master-0 kubenswrapper[28120]: I0220 15:14:43.459253 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n" podStartSLOduration=35.070427652 podStartE2EDuration="37.459221133s" podCreationTimestamp="2026-02-20 15:14:06 +0000 UTC" firstStartedPulling="2026-02-20 15:14:40.13927129 +0000 UTC m=+818.400064903" lastFinishedPulling="2026-02-20 15:14:42.528064811 +0000 UTC m=+820.788858384" observedRunningTime="2026-02-20 15:14:43.44947651 +0000 UTC m=+821.710270073" watchObservedRunningTime="2026-02-20 15:14:43.459221133 +0000 UTC m=+821.720014736"
Feb 20 15:14:49.304492 master-0 kubenswrapper[28120]: I0220 15:14:49.304432 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-9pk5l"
Feb 20 15:14:49.577646 master-0 kubenswrapper[28120]: I0220 15:14:49.577537 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-hvd6n"
Feb 20 15:14:49.717360 master-0 kubenswrapper[28120]: I0220 15:14:49.717306 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-gssk4"
Feb 20 15:15:00.198487 master-0 kubenswrapper[28120]: I0220 15:15:00.198195 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"]
Feb 20 15:15:00.199671 master-0 kubenswrapper[28120]: I0220 15:15:00.199629 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"
Feb 20 15:15:00.201865 master-0 kubenswrapper[28120]: I0220 15:15:00.201826 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 20 15:15:00.202159 master-0 kubenswrapper[28120]: I0220 15:15:00.201868 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-glnt8"
Feb 20 15:15:00.226243 master-0 kubenswrapper[28120]: I0220 15:15:00.226210 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"]
Feb 20 15:15:00.370301 master-0 kubenswrapper[28120]: I0220 15:15:00.370208 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-secret-volume\") pod \"collect-profiles-29526675-thwjl\" (UID: \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"
Feb 20 15:15:00.370550 master-0 kubenswrapper[28120]: I0220 15:15:00.370384 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jltl\" (UniqueName: \"kubernetes.io/projected/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-kube-api-access-4jltl\") pod \"collect-profiles-29526675-thwjl\" (UID: \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"
Feb 20 15:15:00.370550 master-0 kubenswrapper[28120]: I0220 15:15:00.370420 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-config-volume\") pod \"collect-profiles-29526675-thwjl\" (UID: \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"
Feb 20 15:15:00.472066 master-0 kubenswrapper[28120]: I0220 15:15:00.471899 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-secret-volume\") pod \"collect-profiles-29526675-thwjl\" (UID: \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"
Feb 20 15:15:00.472066 master-0 kubenswrapper[28120]: I0220 15:15:00.472021 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jltl\" (UniqueName: \"kubernetes.io/projected/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-kube-api-access-4jltl\") pod \"collect-profiles-29526675-thwjl\" (UID: \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"
Feb 20 15:15:00.472066 master-0 kubenswrapper[28120]: I0220 15:15:00.472044 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-config-volume\") pod \"collect-profiles-29526675-thwjl\" (UID: \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"
Feb 20 15:15:00.472896 master-0 kubenswrapper[28120]: I0220 15:15:00.472870 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-config-volume\") pod \"collect-profiles-29526675-thwjl\" (UID: \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"
Feb 20 15:15:00.478236 master-0 kubenswrapper[28120]: I0220 15:15:00.478169 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-secret-volume\") pod \"collect-profiles-29526675-thwjl\" (UID: \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"
Feb 20 15:15:00.491081 master-0 kubenswrapper[28120]: I0220 15:15:00.490994 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jltl\" (UniqueName: \"kubernetes.io/projected/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-kube-api-access-4jltl\") pod \"collect-profiles-29526675-thwjl\" (UID: \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"
Feb 20 15:15:00.536655 master-0 kubenswrapper[28120]: I0220 15:15:00.536554 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"
Feb 20 15:15:01.652328 master-0 kubenswrapper[28120]: I0220 15:15:01.652276 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"]
Feb 20 15:15:01.663019 master-0 kubenswrapper[28120]: W0220 15:15:01.662978 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3ce1797_e963_4778_a45c_64b6c8aa5ab7.slice/crio-d6e5baea82d75e3844e7bdd0a99f0413cd42b73f9a2a730b3108850e2ac0e04e WatchSource:0}: Error finding container d6e5baea82d75e3844e7bdd0a99f0413cd42b73f9a2a730b3108850e2ac0e04e: Status 404 returned error can't find the container with id d6e5baea82d75e3844e7bdd0a99f0413cd42b73f9a2a730b3108850e2ac0e04e
Feb 20 15:15:02.632615 master-0 kubenswrapper[28120]: I0220 15:15:02.632571 28120 generic.go:334] "Generic (PLEG): container finished" podID="b3ce1797-e963-4778-a45c-64b6c8aa5ab7" containerID="a39c7e3f18efbcc01096ddddb6bfdba6fddad8d88e2fa7d71fa497c357a408ba" exitCode=0
Feb 20 15:15:02.632615 master-0 kubenswrapper[28120]: I0220 15:15:02.632623 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl" event={"ID":"b3ce1797-e963-4778-a45c-64b6c8aa5ab7","Type":"ContainerDied","Data":"a39c7e3f18efbcc01096ddddb6bfdba6fddad8d88e2fa7d71fa497c357a408ba"}
Feb 20 15:15:02.632864 master-0 kubenswrapper[28120]: I0220 15:15:02.632649 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl" event={"ID":"b3ce1797-e963-4778-a45c-64b6c8aa5ab7","Type":"ContainerStarted","Data":"d6e5baea82d75e3844e7bdd0a99f0413cd42b73f9a2a730b3108850e2ac0e04e"}
Feb 20 15:15:04.003949 master-0 kubenswrapper[28120]: I0220 15:15:04.003872 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl"
Feb 20 15:15:04.101637 master-0 kubenswrapper[28120]: I0220 15:15:04.101550 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jltl\" (UniqueName: \"kubernetes.io/projected/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-kube-api-access-4jltl\") pod \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\" (UID: \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\") "
Feb 20 15:15:04.101960 master-0 kubenswrapper[28120]: I0220 15:15:04.101713 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-config-volume\") pod \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\" (UID: \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\") "
Feb 20 15:15:04.101960 master-0 kubenswrapper[28120]: I0220 15:15:04.101843 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-secret-volume\") pod \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\" (UID: \"b3ce1797-e963-4778-a45c-64b6c8aa5ab7\") "
Feb 20 15:15:04.102235 master-0 kubenswrapper[28120]: I0220 15:15:04.102181 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-config-volume" (OuterVolumeSpecName: "config-volume") pod "b3ce1797-e963-4778-a45c-64b6c8aa5ab7" (UID: "b3ce1797-e963-4778-a45c-64b6c8aa5ab7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:15:04.102630 master-0 kubenswrapper[28120]: I0220 15:15:04.102590 28120 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 20 15:15:04.104541 master-0 kubenswrapper[28120]: I0220 15:15:04.104498 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b3ce1797-e963-4778-a45c-64b6c8aa5ab7" (UID: "b3ce1797-e963-4778-a45c-64b6c8aa5ab7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:15:04.104751 master-0 kubenswrapper[28120]: I0220 15:15:04.104680 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-kube-api-access-4jltl" (OuterVolumeSpecName: "kube-api-access-4jltl") pod "b3ce1797-e963-4778-a45c-64b6c8aa5ab7" (UID: "b3ce1797-e963-4778-a45c-64b6c8aa5ab7"). InnerVolumeSpecName "kube-api-access-4jltl".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:15:04.206101 master-0 kubenswrapper[28120]: I0220 15:15:04.205999 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jltl\" (UniqueName: \"kubernetes.io/projected/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-kube-api-access-4jltl\") on node \"master-0\" DevicePath \"\"" Feb 20 15:15:04.206101 master-0 kubenswrapper[28120]: I0220 15:15:04.206054 28120 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3ce1797-e963-4778-a45c-64b6c8aa5ab7-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 20 15:15:04.656487 master-0 kubenswrapper[28120]: I0220 15:15:04.656398 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl" event={"ID":"b3ce1797-e963-4778-a45c-64b6c8aa5ab7","Type":"ContainerDied","Data":"d6e5baea82d75e3844e7bdd0a99f0413cd42b73f9a2a730b3108850e2ac0e04e"} Feb 20 15:15:04.656487 master-0 kubenswrapper[28120]: I0220 15:15:04.656461 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6e5baea82d75e3844e7bdd0a99f0413cd42b73f9a2a730b3108850e2ac0e04e" Feb 20 15:15:04.656487 master-0 kubenswrapper[28120]: I0220 15:15:04.656474 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29526675-thwjl" Feb 20 15:15:34.723170 master-0 kubenswrapper[28120]: I0220 15:15:34.723117 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-mm726"] Feb 20 15:15:34.723785 master-0 kubenswrapper[28120]: E0220 15:15:34.723517 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ce1797-e963-4778-a45c-64b6c8aa5ab7" containerName="collect-profiles" Feb 20 15:15:34.723785 master-0 kubenswrapper[28120]: I0220 15:15:34.723530 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ce1797-e963-4778-a45c-64b6c8aa5ab7" containerName="collect-profiles" Feb 20 15:15:34.723785 master-0 kubenswrapper[28120]: I0220 15:15:34.723736 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ce1797-e963-4778-a45c-64b6c8aa5ab7" containerName="collect-profiles" Feb 20 15:15:34.726954 master-0 kubenswrapper[28120]: I0220 15:15:34.724786 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" Feb 20 15:15:34.728976 master-0 kubenswrapper[28120]: I0220 15:15:34.727770 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 20 15:15:34.728976 master-0 kubenswrapper[28120]: I0220 15:15:34.728154 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 20 15:15:34.728976 master-0 kubenswrapper[28120]: I0220 15:15:34.728360 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 20 15:15:34.737439 master-0 kubenswrapper[28120]: I0220 15:15:34.737373 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-mm726"] Feb 20 15:15:34.809704 master-0 kubenswrapper[28120]: I0220 15:15:34.809635 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d78499c-dn4fc"] Feb 20 15:15:34.811299 master-0 kubenswrapper[28120]: I0220 15:15:34.811269 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:34.816374 master-0 kubenswrapper[28120]: I0220 15:15:34.816346 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 20 15:15:34.827076 master-0 kubenswrapper[28120]: I0220 15:15:34.826892 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4zqg\" (UniqueName: \"kubernetes.io/projected/76a10caa-8597-4073-a879-ddebee1dbd76-kube-api-access-r4zqg\") pod \"dnsmasq-dns-5c7b6fb887-mm726\" (UID: \"76a10caa-8597-4073-a879-ddebee1dbd76\") " pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" Feb 20 15:15:34.827428 master-0 kubenswrapper[28120]: I0220 15:15:34.827410 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a10caa-8597-4073-a879-ddebee1dbd76-config\") pod \"dnsmasq-dns-5c7b6fb887-mm726\" (UID: \"76a10caa-8597-4073-a879-ddebee1dbd76\") " pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" Feb 20 15:15:34.842986 master-0 kubenswrapper[28120]: I0220 15:15:34.838176 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-dn4fc"] Feb 20 15:15:34.934394 master-0 kubenswrapper[28120]: I0220 15:15:34.933964 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f516ada9-a716-4f69-9f2b-130599c34ebe-config\") pod \"dnsmasq-dns-7d78499c-dn4fc\" (UID: \"f516ada9-a716-4f69-9f2b-130599c34ebe\") " pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:34.934394 master-0 kubenswrapper[28120]: I0220 15:15:34.934079 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4zqg\" (UniqueName: \"kubernetes.io/projected/76a10caa-8597-4073-a879-ddebee1dbd76-kube-api-access-r4zqg\") pod \"dnsmasq-dns-5c7b6fb887-mm726\" (UID: 
\"76a10caa-8597-4073-a879-ddebee1dbd76\") " pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" Feb 20 15:15:34.934394 master-0 kubenswrapper[28120]: I0220 15:15:34.934306 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bh9j\" (UniqueName: \"kubernetes.io/projected/f516ada9-a716-4f69-9f2b-130599c34ebe-kube-api-access-5bh9j\") pod \"dnsmasq-dns-7d78499c-dn4fc\" (UID: \"f516ada9-a716-4f69-9f2b-130599c34ebe\") " pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:34.934394 master-0 kubenswrapper[28120]: I0220 15:15:34.934375 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a10caa-8597-4073-a879-ddebee1dbd76-config\") pod \"dnsmasq-dns-5c7b6fb887-mm726\" (UID: \"76a10caa-8597-4073-a879-ddebee1dbd76\") " pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" Feb 20 15:15:34.934794 master-0 kubenswrapper[28120]: I0220 15:15:34.934410 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f516ada9-a716-4f69-9f2b-130599c34ebe-dns-svc\") pod \"dnsmasq-dns-7d78499c-dn4fc\" (UID: \"f516ada9-a716-4f69-9f2b-130599c34ebe\") " pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:34.935386 master-0 kubenswrapper[28120]: I0220 15:15:34.935332 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a10caa-8597-4073-a879-ddebee1dbd76-config\") pod \"dnsmasq-dns-5c7b6fb887-mm726\" (UID: \"76a10caa-8597-4073-a879-ddebee1dbd76\") " pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" Feb 20 15:15:34.958943 master-0 kubenswrapper[28120]: I0220 15:15:34.958868 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4zqg\" (UniqueName: \"kubernetes.io/projected/76a10caa-8597-4073-a879-ddebee1dbd76-kube-api-access-r4zqg\") pod 
\"dnsmasq-dns-5c7b6fb887-mm726\" (UID: \"76a10caa-8597-4073-a879-ddebee1dbd76\") " pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" Feb 20 15:15:35.035975 master-0 kubenswrapper[28120]: I0220 15:15:35.035906 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bh9j\" (UniqueName: \"kubernetes.io/projected/f516ada9-a716-4f69-9f2b-130599c34ebe-kube-api-access-5bh9j\") pod \"dnsmasq-dns-7d78499c-dn4fc\" (UID: \"f516ada9-a716-4f69-9f2b-130599c34ebe\") " pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:35.036225 master-0 kubenswrapper[28120]: I0220 15:15:35.036005 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f516ada9-a716-4f69-9f2b-130599c34ebe-dns-svc\") pod \"dnsmasq-dns-7d78499c-dn4fc\" (UID: \"f516ada9-a716-4f69-9f2b-130599c34ebe\") " pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:35.036225 master-0 kubenswrapper[28120]: I0220 15:15:35.036041 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f516ada9-a716-4f69-9f2b-130599c34ebe-config\") pod \"dnsmasq-dns-7d78499c-dn4fc\" (UID: \"f516ada9-a716-4f69-9f2b-130599c34ebe\") " pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:35.036949 master-0 kubenswrapper[28120]: I0220 15:15:35.036908 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f516ada9-a716-4f69-9f2b-130599c34ebe-config\") pod \"dnsmasq-dns-7d78499c-dn4fc\" (UID: \"f516ada9-a716-4f69-9f2b-130599c34ebe\") " pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:35.037144 master-0 kubenswrapper[28120]: I0220 15:15:35.037099 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f516ada9-a716-4f69-9f2b-130599c34ebe-dns-svc\") pod \"dnsmasq-dns-7d78499c-dn4fc\" (UID: 
\"f516ada9-a716-4f69-9f2b-130599c34ebe\") " pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:35.053670 master-0 kubenswrapper[28120]: I0220 15:15:35.053605 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" Feb 20 15:15:35.058749 master-0 kubenswrapper[28120]: I0220 15:15:35.058721 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bh9j\" (UniqueName: \"kubernetes.io/projected/f516ada9-a716-4f69-9f2b-130599c34ebe-kube-api-access-5bh9j\") pod \"dnsmasq-dns-7d78499c-dn4fc\" (UID: \"f516ada9-a716-4f69-9f2b-130599c34ebe\") " pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:35.149848 master-0 kubenswrapper[28120]: I0220 15:15:35.148631 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:35.541981 master-0 kubenswrapper[28120]: I0220 15:15:35.541910 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-mm726"] Feb 20 15:15:35.629230 master-0 kubenswrapper[28120]: I0220 15:15:35.628639 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-dn4fc"] Feb 20 15:15:35.632311 master-0 kubenswrapper[28120]: W0220 15:15:35.632260 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf516ada9_a716_4f69_9f2b_130599c34ebe.slice/crio-ccf3c1369a4a10c749d3af486e114eccd9ed7818e4db6ff835b2387dd4d92ae0 WatchSource:0}: Error finding container ccf3c1369a4a10c749d3af486e114eccd9ed7818e4db6ff835b2387dd4d92ae0: Status 404 returned error can't find the container with id ccf3c1369a4a10c749d3af486e114eccd9ed7818e4db6ff835b2387dd4d92ae0 Feb 20 15:15:36.286409 master-0 kubenswrapper[28120]: I0220 15:15:36.286242 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-dn4fc" 
event={"ID":"f516ada9-a716-4f69-9f2b-130599c34ebe","Type":"ContainerStarted","Data":"ccf3c1369a4a10c749d3af486e114eccd9ed7818e4db6ff835b2387dd4d92ae0"} Feb 20 15:15:36.291155 master-0 kubenswrapper[28120]: I0220 15:15:36.290827 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" event={"ID":"76a10caa-8597-4073-a879-ddebee1dbd76","Type":"ContainerStarted","Data":"04a1ce2ec6770cd0c5c8c8f7610b8c886d556118c4dc2902472cafa8664a0bac"} Feb 20 15:15:36.807636 master-0 kubenswrapper[28120]: I0220 15:15:36.807497 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-mm726"] Feb 20 15:15:36.867065 master-0 kubenswrapper[28120]: I0220 15:15:36.865986 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-c5b5l"] Feb 20 15:15:36.868381 master-0 kubenswrapper[28120]: I0220 15:15:36.868337 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" Feb 20 15:15:36.873159 master-0 kubenswrapper[28120]: I0220 15:15:36.873102 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-c5b5l"] Feb 20 15:15:36.975580 master-0 kubenswrapper[28120]: I0220 15:15:36.972818 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thvjk\" (UniqueName: \"kubernetes.io/projected/91310aca-57ce-4308-afcc-95511e83dc27-kube-api-access-thvjk\") pod \"dnsmasq-dns-5bcd98d69f-c5b5l\" (UID: \"91310aca-57ce-4308-afcc-95511e83dc27\") " pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" Feb 20 15:15:36.975580 master-0 kubenswrapper[28120]: I0220 15:15:36.972955 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91310aca-57ce-4308-afcc-95511e83dc27-config\") pod \"dnsmasq-dns-5bcd98d69f-c5b5l\" (UID: 
\"91310aca-57ce-4308-afcc-95511e83dc27\") " pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" Feb 20 15:15:36.975580 master-0 kubenswrapper[28120]: I0220 15:15:36.973009 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91310aca-57ce-4308-afcc-95511e83dc27-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-c5b5l\" (UID: \"91310aca-57ce-4308-afcc-95511e83dc27\") " pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" Feb 20 15:15:37.081252 master-0 kubenswrapper[28120]: I0220 15:15:37.080744 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thvjk\" (UniqueName: \"kubernetes.io/projected/91310aca-57ce-4308-afcc-95511e83dc27-kube-api-access-thvjk\") pod \"dnsmasq-dns-5bcd98d69f-c5b5l\" (UID: \"91310aca-57ce-4308-afcc-95511e83dc27\") " pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" Feb 20 15:15:37.081252 master-0 kubenswrapper[28120]: I0220 15:15:37.081125 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91310aca-57ce-4308-afcc-95511e83dc27-config\") pod \"dnsmasq-dns-5bcd98d69f-c5b5l\" (UID: \"91310aca-57ce-4308-afcc-95511e83dc27\") " pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" Feb 20 15:15:37.081252 master-0 kubenswrapper[28120]: I0220 15:15:37.081185 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91310aca-57ce-4308-afcc-95511e83dc27-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-c5b5l\" (UID: \"91310aca-57ce-4308-afcc-95511e83dc27\") " pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" Feb 20 15:15:37.083048 master-0 kubenswrapper[28120]: I0220 15:15:37.082880 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91310aca-57ce-4308-afcc-95511e83dc27-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-c5b5l\" (UID: 
\"91310aca-57ce-4308-afcc-95511e83dc27\") " pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" Feb 20 15:15:37.084218 master-0 kubenswrapper[28120]: I0220 15:15:37.083793 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91310aca-57ce-4308-afcc-95511e83dc27-config\") pod \"dnsmasq-dns-5bcd98d69f-c5b5l\" (UID: \"91310aca-57ce-4308-afcc-95511e83dc27\") " pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" Feb 20 15:15:37.110457 master-0 kubenswrapper[28120]: I0220 15:15:37.107238 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thvjk\" (UniqueName: \"kubernetes.io/projected/91310aca-57ce-4308-afcc-95511e83dc27-kube-api-access-thvjk\") pod \"dnsmasq-dns-5bcd98d69f-c5b5l\" (UID: \"91310aca-57ce-4308-afcc-95511e83dc27\") " pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" Feb 20 15:15:37.202036 master-0 kubenswrapper[28120]: I0220 15:15:37.198496 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" Feb 20 15:15:37.225359 master-0 kubenswrapper[28120]: I0220 15:15:37.225223 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-dn4fc"] Feb 20 15:15:37.260814 master-0 kubenswrapper[28120]: I0220 15:15:37.260762 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-nf55m"] Feb 20 15:15:37.274960 master-0 kubenswrapper[28120]: I0220 15:15:37.274898 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" Feb 20 15:15:37.276843 master-0 kubenswrapper[28120]: I0220 15:15:37.275979 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-nf55m"] Feb 20 15:15:37.398495 master-0 kubenswrapper[28120]: I0220 15:15:37.398439 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8742b7a0-5156-4b20-b683-feb84b330828-config\") pod \"dnsmasq-dns-6b98d7b55c-nf55m\" (UID: \"8742b7a0-5156-4b20-b683-feb84b330828\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" Feb 20 15:15:37.398944 master-0 kubenswrapper[28120]: I0220 15:15:37.398709 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvpbc\" (UniqueName: \"kubernetes.io/projected/8742b7a0-5156-4b20-b683-feb84b330828-kube-api-access-vvpbc\") pod \"dnsmasq-dns-6b98d7b55c-nf55m\" (UID: \"8742b7a0-5156-4b20-b683-feb84b330828\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" Feb 20 15:15:37.399026 master-0 kubenswrapper[28120]: I0220 15:15:37.398988 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8742b7a0-5156-4b20-b683-feb84b330828-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-nf55m\" (UID: \"8742b7a0-5156-4b20-b683-feb84b330828\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" Feb 20 15:15:37.502939 master-0 kubenswrapper[28120]: I0220 15:15:37.501861 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvpbc\" (UniqueName: \"kubernetes.io/projected/8742b7a0-5156-4b20-b683-feb84b330828-kube-api-access-vvpbc\") pod \"dnsmasq-dns-6b98d7b55c-nf55m\" (UID: \"8742b7a0-5156-4b20-b683-feb84b330828\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" Feb 20 15:15:37.502939 master-0 kubenswrapper[28120]: I0220 15:15:37.501986 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8742b7a0-5156-4b20-b683-feb84b330828-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-nf55m\" (UID: \"8742b7a0-5156-4b20-b683-feb84b330828\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" Feb 20 15:15:37.502939 master-0 kubenswrapper[28120]: I0220 15:15:37.502024 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8742b7a0-5156-4b20-b683-feb84b330828-config\") pod \"dnsmasq-dns-6b98d7b55c-nf55m\" (UID: \"8742b7a0-5156-4b20-b683-feb84b330828\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" Feb 20 15:15:37.524944 master-0 kubenswrapper[28120]: I0220 15:15:37.510350 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8742b7a0-5156-4b20-b683-feb84b330828-config\") pod \"dnsmasq-dns-6b98d7b55c-nf55m\" (UID: \"8742b7a0-5156-4b20-b683-feb84b330828\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" Feb 20 15:15:37.524944 master-0 kubenswrapper[28120]: I0220 15:15:37.515369 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8742b7a0-5156-4b20-b683-feb84b330828-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-nf55m\" (UID: \"8742b7a0-5156-4b20-b683-feb84b330828\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" Feb 20 15:15:37.547944 master-0 kubenswrapper[28120]: I0220 15:15:37.544867 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvpbc\" (UniqueName: \"kubernetes.io/projected/8742b7a0-5156-4b20-b683-feb84b330828-kube-api-access-vvpbc\") pod \"dnsmasq-dns-6b98d7b55c-nf55m\" (UID: \"8742b7a0-5156-4b20-b683-feb84b330828\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" Feb 20 15:15:37.661706 master-0 kubenswrapper[28120]: I0220 15:15:37.661583 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" Feb 20 15:15:37.833210 master-0 kubenswrapper[28120]: I0220 15:15:37.833156 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-c5b5l"] Feb 20 15:15:37.854128 master-0 kubenswrapper[28120]: W0220 15:15:37.853914 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91310aca_57ce_4308_afcc_95511e83dc27.slice/crio-d77a03b9f73bbf68b17db4fab61a89567298c871a6236628fe9fc99794d16db4 WatchSource:0}: Error finding container d77a03b9f73bbf68b17db4fab61a89567298c871a6236628fe9fc99794d16db4: Status 404 returned error can't find the container with id d77a03b9f73bbf68b17db4fab61a89567298c871a6236628fe9fc99794d16db4 Feb 20 15:15:38.166226 master-0 kubenswrapper[28120]: I0220 15:15:38.166173 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-nf55m"] Feb 20 15:15:38.167329 master-0 kubenswrapper[28120]: W0220 15:15:38.167288 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8742b7a0_5156_4b20_b683_feb84b330828.slice/crio-f7be75fb7df5e97c782927638f00a2324ffbe4556c4df5bc62521b9b776908ef WatchSource:0}: Error finding container f7be75fb7df5e97c782927638f00a2324ffbe4556c4df5bc62521b9b776908ef: Status 404 returned error can't find the container with id f7be75fb7df5e97c782927638f00a2324ffbe4556c4df5bc62521b9b776908ef Feb 20 15:15:38.312465 master-0 kubenswrapper[28120]: I0220 15:15:38.312410 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" event={"ID":"91310aca-57ce-4308-afcc-95511e83dc27","Type":"ContainerStarted","Data":"d77a03b9f73bbf68b17db4fab61a89567298c871a6236628fe9fc99794d16db4"} Feb 20 15:15:38.314066 master-0 kubenswrapper[28120]: I0220 15:15:38.314043 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" event={"ID":"8742b7a0-5156-4b20-b683-feb84b330828","Type":"ContainerStarted","Data":"f7be75fb7df5e97c782927638f00a2324ffbe4556c4df5bc62521b9b776908ef"} Feb 20 15:15:41.002251 master-0 kubenswrapper[28120]: I0220 15:15:41.002181 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 20 15:15:41.009046 master-0 kubenswrapper[28120]: I0220 15:15:41.009008 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 20 15:15:41.011506 master-0 kubenswrapper[28120]: I0220 15:15:41.011213 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 20 15:15:41.011506 master-0 kubenswrapper[28120]: I0220 15:15:41.011424 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 20 15:15:41.012143 master-0 kubenswrapper[28120]: I0220 15:15:41.012033 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 20 15:15:41.012380 master-0 kubenswrapper[28120]: I0220 15:15:41.012347 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 20 15:15:41.012881 master-0 kubenswrapper[28120]: I0220 15:15:41.012856 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 20 15:15:41.013224 master-0 kubenswrapper[28120]: I0220 15:15:41.012901 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 20 15:15:41.017861 master-0 kubenswrapper[28120]: I0220 15:15:41.017558 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 20 15:15:41.195416 master-0 kubenswrapper[28120]: I0220 15:15:41.193825 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0" Feb 20 15:15:41.195416 master-0 kubenswrapper[28120]: I0220 15:15:41.193943 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0" Feb 20 15:15:41.195416 master-0 kubenswrapper[28120]: I0220 15:15:41.193979 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0" Feb 20 15:15:41.195416 master-0 kubenswrapper[28120]: I0220 15:15:41.194011 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0" Feb 20 15:15:41.195416 master-0 kubenswrapper[28120]: I0220 15:15:41.194030 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0" Feb 20 15:15:41.195416 master-0 kubenswrapper[28120]: I0220 15:15:41.194059 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-config-data\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.195416 master-0 kubenswrapper[28120]: I0220 15:15:41.194081 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57cxw\" (UniqueName: \"kubernetes.io/projected/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-kube-api-access-57cxw\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.195416 master-0 kubenswrapper[28120]: I0220 15:15:41.194116 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.195416 master-0 kubenswrapper[28120]: I0220 15:15:41.194137 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-627b02da-0050-44fc-99dd-b9edf35d00da\" (UniqueName: \"kubernetes.io/csi/topolvm.io^18a838cb-4f88-4235-b3f2-83b8e4f40191\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.195416 master-0 kubenswrapper[28120]: I0220 15:15:41.194152 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.195416 master-0 kubenswrapper[28120]: I0220 15:15:41.194819 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.296552 master-0 kubenswrapper[28120]: I0220 15:15:41.296409 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-config-data\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.296552 master-0 kubenswrapper[28120]: I0220 15:15:41.296511 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57cxw\" (UniqueName: \"kubernetes.io/projected/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-kube-api-access-57cxw\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.296760 master-0 kubenswrapper[28120]: I0220 15:15:41.296568 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.296760 master-0 kubenswrapper[28120]: I0220 15:15:41.296603 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-627b02da-0050-44fc-99dd-b9edf35d00da\" (UniqueName: \"kubernetes.io/csi/topolvm.io^18a838cb-4f88-4235-b3f2-83b8e4f40191\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.296760 master-0 kubenswrapper[28120]: I0220 15:15:41.296627 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.296760 master-0 kubenswrapper[28120]: I0220 15:15:41.296664 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.296760 master-0 kubenswrapper[28120]: I0220 15:15:41.296727 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.296910 master-0 kubenswrapper[28120]: I0220 15:15:41.296783 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.296910 master-0 kubenswrapper[28120]: I0220 15:15:41.296819 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.296910 master-0 kubenswrapper[28120]: I0220 15:15:41.296849 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.296910 master-0 kubenswrapper[28120]: I0220 15:15:41.296875 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.297474 master-0 kubenswrapper[28120]: I0220 15:15:41.297439 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.298913 master-0 kubenswrapper[28120]: I0220 15:15:41.298875 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.299981 master-0 kubenswrapper[28120]: I0220 15:15:41.299951 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.306241 master-0 kubenswrapper[28120]: I0220 15:15:41.306183 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.306705 master-0 kubenswrapper[28120]: I0220 15:15:41.306665 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.308052 master-0 kubenswrapper[28120]: I0220 15:15:41.307828 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-config-data\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.314984 master-0 kubenswrapper[28120]: I0220 15:15:41.314656 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 20 15:15:41.314984 master-0 kubenswrapper[28120]: I0220 15:15:41.314696 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-627b02da-0050-44fc-99dd-b9edf35d00da\" (UniqueName: \"kubernetes.io/csi/topolvm.io^18a838cb-4f88-4235-b3f2-83b8e4f40191\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/7e041097e31a3f9d34559f5a2356d7c8d6f1c8ad38476609130392ac92eacb96/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.320337 master-0 kubenswrapper[28120]: I0220 15:15:41.320293 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.320791 master-0 kubenswrapper[28120]: I0220 15:15:41.320740 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.321633 master-0 kubenswrapper[28120]: I0220 15:15:41.321604 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.325625 master-0 kubenswrapper[28120]: I0220 15:15:41.325593 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57cxw\" (UniqueName: \"kubernetes.io/projected/86fa1b6a-d104-4787-b6d5-21dfd3a324f8-kube-api-access-57cxw\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0"
Feb 20 15:15:41.351767 master-0 kubenswrapper[28120]: I0220 15:15:41.351717 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 20 15:15:41.356212 master-0 kubenswrapper[28120]: I0220 15:15:41.356091 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 20 15:15:41.360968 master-0 kubenswrapper[28120]: I0220 15:15:41.358562 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 20 15:15:41.360968 master-0 kubenswrapper[28120]: I0220 15:15:41.358606 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 20 15:15:41.370906 master-0 kubenswrapper[28120]: I0220 15:15:41.370817 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 20 15:15:41.384963 master-0 kubenswrapper[28120]: I0220 15:15:41.384805 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 20 15:15:41.386664 master-0 kubenswrapper[28120]: I0220 15:15:41.386411 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.389604 master-0 kubenswrapper[28120]: I0220 15:15:41.389547 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 20 15:15:41.390653 master-0 kubenswrapper[28120]: I0220 15:15:41.390454 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 20 15:15:41.390653 master-0 kubenswrapper[28120]: I0220 15:15:41.390517 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 20 15:15:41.390653 master-0 kubenswrapper[28120]: I0220 15:15:41.390550 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 20 15:15:41.390847 master-0 kubenswrapper[28120]: I0220 15:15:41.390677 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 20 15:15:41.391038 master-0 kubenswrapper[28120]: I0220 15:15:41.390961 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 20 15:15:41.392798 master-0 kubenswrapper[28120]: I0220 15:15:41.390873 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 20 15:15:41.569451 master-0 kubenswrapper[28120]: I0220 15:15:41.569325 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 20 15:15:41.609740 master-0 kubenswrapper[28120]: I0220 15:15:41.609521 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/041ad820-183f-41ee-b690-b1687d55e12e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.609740 master-0 kubenswrapper[28120]: I0220 15:15:41.609573 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/041ad820-183f-41ee-b690-b1687d55e12e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.609740 master-0 kubenswrapper[28120]: I0220 15:15:41.609612 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/041ad820-183f-41ee-b690-b1687d55e12e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.609740 master-0 kubenswrapper[28120]: I0220 15:15:41.609648 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/041ad820-183f-41ee-b690-b1687d55e12e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.610290 master-0 kubenswrapper[28120]: I0220 15:15:41.609888 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kj92\" (UniqueName: \"kubernetes.io/projected/041ad820-183f-41ee-b690-b1687d55e12e-kube-api-access-8kj92\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.610290 master-0 kubenswrapper[28120]: I0220 15:15:41.610030 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg2cv\" (UniqueName: \"kubernetes.io/projected/d6ef9865-9c4d-489b-a84b-7e444820e473-kube-api-access-vg2cv\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.610290 master-0 kubenswrapper[28120]: I0220 15:15:41.610071 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/041ad820-183f-41ee-b690-b1687d55e12e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.610290 master-0 kubenswrapper[28120]: I0220 15:15:41.610146 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ef9865-9c4d-489b-a84b-7e444820e473-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.610290 master-0 kubenswrapper[28120]: I0220 15:15:41.610205 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/041ad820-183f-41ee-b690-b1687d55e12e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.610290 master-0 kubenswrapper[28120]: I0220 15:15:41.610273 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c5eff32d-b5d9-463e-afe4-4203e2b8d464\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fa523174-471c-462e-8a76-c6e65d3dc011\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.611088 master-0 kubenswrapper[28120]: I0220 15:15:41.610645 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6ef9865-9c4d-489b-a84b-7e444820e473-kolla-config\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.611088 master-0 kubenswrapper[28120]: I0220 15:15:41.610697 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/041ad820-183f-41ee-b690-b1687d55e12e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.611088 master-0 kubenswrapper[28120]: I0220 15:15:41.610783 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6ef9865-9c4d-489b-a84b-7e444820e473-config-data\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.611088 master-0 kubenswrapper[28120]: I0220 15:15:41.610826 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/041ad820-183f-41ee-b690-b1687d55e12e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.611088 master-0 kubenswrapper[28120]: I0220 15:15:41.610954 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ef9865-9c4d-489b-a84b-7e444820e473-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.611088 master-0 kubenswrapper[28120]: I0220 15:15:41.611055 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/041ad820-183f-41ee-b690-b1687d55e12e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.713688 master-0 kubenswrapper[28120]: I0220 15:15:41.713458 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/041ad820-183f-41ee-b690-b1687d55e12e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.713688 master-0 kubenswrapper[28120]: I0220 15:15:41.713527 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c5eff32d-b5d9-463e-afe4-4203e2b8d464\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fa523174-471c-462e-8a76-c6e65d3dc011\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.713688 master-0 kubenswrapper[28120]: I0220 15:15:41.713556 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6ef9865-9c4d-489b-a84b-7e444820e473-kolla-config\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.713688 master-0 kubenswrapper[28120]: I0220 15:15:41.713578 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/041ad820-183f-41ee-b690-b1687d55e12e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.713688 master-0 kubenswrapper[28120]: I0220 15:15:41.713632 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6ef9865-9c4d-489b-a84b-7e444820e473-config-data\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.714361 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/041ad820-183f-41ee-b690-b1687d55e12e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.714406 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ef9865-9c4d-489b-a84b-7e444820e473-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.714458 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/041ad820-183f-41ee-b690-b1687d55e12e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.714491 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/041ad820-183f-41ee-b690-b1687d55e12e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.714514 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/041ad820-183f-41ee-b690-b1687d55e12e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.714582 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/041ad820-183f-41ee-b690-b1687d55e12e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.715220 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/041ad820-183f-41ee-b690-b1687d55e12e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.715281 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/041ad820-183f-41ee-b690-b1687d55e12e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.715326 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/041ad820-183f-41ee-b690-b1687d55e12e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.715373 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6ef9865-9c4d-489b-a84b-7e444820e473-config-data\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.715396 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kj92\" (UniqueName: \"kubernetes.io/projected/041ad820-183f-41ee-b690-b1687d55e12e-kube-api-access-8kj92\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.715456 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg2cv\" (UniqueName: \"kubernetes.io/projected/d6ef9865-9c4d-489b-a84b-7e444820e473-kube-api-access-vg2cv\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.715504 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/041ad820-183f-41ee-b690-b1687d55e12e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.715575 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ef9865-9c4d-489b-a84b-7e444820e473-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.715732 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/041ad820-183f-41ee-b690-b1687d55e12e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.715800 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/041ad820-183f-41ee-b690-b1687d55e12e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.715905 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6ef9865-9c4d-489b-a84b-7e444820e473-kolla-config\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.717250 master-0 kubenswrapper[28120]: I0220 15:15:41.716512 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/041ad820-183f-41ee-b690-b1687d55e12e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.720534 master-0 kubenswrapper[28120]: I0220 15:15:41.719108 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6ef9865-9c4d-489b-a84b-7e444820e473-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.720534 master-0 kubenswrapper[28120]: I0220 15:15:41.720023 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/041ad820-183f-41ee-b690-b1687d55e12e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.722001 master-0 kubenswrapper[28120]: I0220 15:15:41.721592 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/041ad820-183f-41ee-b690-b1687d55e12e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.722567 master-0 kubenswrapper[28120]: I0220 15:15:41.722542 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6ef9865-9c4d-489b-a84b-7e444820e473-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:41.722999 master-0 kubenswrapper[28120]: I0220 15:15:41.722939 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/041ad820-183f-41ee-b690-b1687d55e12e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.725980 master-0 kubenswrapper[28120]: I0220 15:15:41.725894 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 20 15:15:41.725980 master-0 kubenswrapper[28120]: I0220 15:15:41.725933 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c5eff32d-b5d9-463e-afe4-4203e2b8d464\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fa523174-471c-462e-8a76-c6e65d3dc011\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/00c6709971950b15f26c1513f2f84d673eb31517e321fdea201d8aebf27f14a5/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.728644 master-0 kubenswrapper[28120]: I0220 15:15:41.728605 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/041ad820-183f-41ee-b690-b1687d55e12e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.732379 master-0 kubenswrapper[28120]: I0220 15:15:41.732345 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kj92\" (UniqueName: \"kubernetes.io/projected/041ad820-183f-41ee-b690-b1687d55e12e-kube-api-access-8kj92\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 20 15:15:41.734564 master-0 kubenswrapper[28120]: I0220 15:15:41.734526 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg2cv\" (UniqueName: \"kubernetes.io/projected/d6ef9865-9c4d-489b-a84b-7e444820e473-kube-api-access-vg2cv\") pod \"memcached-0\" (UID: \"d6ef9865-9c4d-489b-a84b-7e444820e473\") " pod="openstack/memcached-0"
Feb 20 15:15:42.017141 master-0 kubenswrapper[28120]: I0220 15:15:42.017093 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 20 15:15:42.656235 master-0 kubenswrapper[28120]: I0220 15:15:42.655316 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 20 15:15:42.659744 master-0 kubenswrapper[28120]: I0220 15:15:42.659715 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 20 15:15:42.663108 master-0 kubenswrapper[28120]: I0220 15:15:42.663083 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 20 15:15:42.663436 master-0 kubenswrapper[28120]: I0220 15:15:42.663418 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 20 15:15:42.667550 master-0 kubenswrapper[28120]: I0220 15:15:42.667527 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 20 15:15:42.673323 master-0 kubenswrapper[28120]: I0220 15:15:42.673146 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 20 15:15:42.734738 master-0 kubenswrapper[28120]: I0220 15:15:42.734655 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/59d78ff4-b331-48f2-bc79-f89b4647e69c-config-data-default\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.735110 master-0 kubenswrapper[28120]: I0220 15:15:42.734768 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/59d78ff4-b331-48f2-bc79-f89b4647e69c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.735110 master-0 kubenswrapper[28120]: I0220 15:15:42.735016 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d78ff4-b331-48f2-bc79-f89b4647e69c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.735110 master-0 kubenswrapper[28120]: I0220 15:15:42.735083 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbzfg\" (UniqueName: \"kubernetes.io/projected/59d78ff4-b331-48f2-bc79-f89b4647e69c-kube-api-access-nbzfg\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.736492 master-0 kubenswrapper[28120]: I0220 15:15:42.735153 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dd59958b-044b-4d1c-a8ca-8b787d6fdf77\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72933879-4523-4473-a289-96ff8a8e008e\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.736492 master-0 kubenswrapper[28120]: I0220 15:15:42.735237 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59d78ff4-b331-48f2-bc79-f89b4647e69c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.736492 master-0 kubenswrapper[28120]: I0220 15:15:42.735280 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59d78ff4-b331-48f2-bc79-f89b4647e69c-kolla-config\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.736492 master-0 kubenswrapper[28120]: I0220 15:15:42.735297 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/59d78ff4-b331-48f2-bc79-f89b4647e69c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.836889 master-0 kubenswrapper[28120]: I0220 15:15:42.836834 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/59d78ff4-b331-48f2-bc79-f89b4647e69c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.837110 master-0 kubenswrapper[28120]: I0220 15:15:42.837073 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d78ff4-b331-48f2-bc79-f89b4647e69c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.837329 master-0 kubenswrapper[28120]: I0220 15:15:42.837282 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbzfg\" (UniqueName: \"kubernetes.io/projected/59d78ff4-b331-48f2-bc79-f89b4647e69c-kube-api-access-nbzfg\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.837375 master-0 kubenswrapper[28120]: I0220 15:15:42.837332 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/59d78ff4-b331-48f2-bc79-f89b4647e69c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.837537 master-0 kubenswrapper[28120]: I0220 15:15:42.837495 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dd59958b-044b-4d1c-a8ca-8b787d6fdf77\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72933879-4523-4473-a289-96ff8a8e008e\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.837635 master-0 kubenswrapper[28120]: I0220 15:15:42.837615 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59d78ff4-b331-48f2-bc79-f89b4647e69c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.837675 master-0 kubenswrapper[28120]: I0220 15:15:42.837665 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/59d78ff4-b331-48f2-bc79-f89b4647e69c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.838017 master-0 kubenswrapper[28120]: I0220 15:15:42.837876 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59d78ff4-b331-48f2-bc79-f89b4647e69c-kolla-config\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.838127 master-0 kubenswrapper[28120]: I0220 15:15:42.838107 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/59d78ff4-b331-48f2-bc79-f89b4647e69c-config-data-default\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0"
Feb 20 15:15:42.838961
master-0 kubenswrapper[28120]: I0220 15:15:42.838867 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59d78ff4-b331-48f2-bc79-f89b4647e69c-kolla-config\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0" Feb 20 15:15:42.839218 master-0 kubenswrapper[28120]: I0220 15:15:42.839195 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59d78ff4-b331-48f2-bc79-f89b4647e69c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0" Feb 20 15:15:42.840450 master-0 kubenswrapper[28120]: I0220 15:15:42.840396 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 20 15:15:42.840450 master-0 kubenswrapper[28120]: I0220 15:15:42.840448 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dd59958b-044b-4d1c-a8ca-8b787d6fdf77\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72933879-4523-4473-a289-96ff8a8e008e\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/215da04a52d7789a7c8ee8f38b504aaa77d60377dfc2652afed53e4520f130c8/globalmount\"" pod="openstack/openstack-galera-0" Feb 20 15:15:42.841109 master-0 kubenswrapper[28120]: I0220 15:15:42.841076 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/59d78ff4-b331-48f2-bc79-f89b4647e69c-config-data-default\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0" Feb 20 15:15:42.841737 master-0 kubenswrapper[28120]: I0220 15:15:42.841699 28120 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d78ff4-b331-48f2-bc79-f89b4647e69c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0" Feb 20 15:15:42.844049 master-0 kubenswrapper[28120]: I0220 15:15:42.843610 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/59d78ff4-b331-48f2-bc79-f89b4647e69c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0" Feb 20 15:15:42.857772 master-0 kubenswrapper[28120]: I0220 15:15:42.857723 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbzfg\" (UniqueName: \"kubernetes.io/projected/59d78ff4-b331-48f2-bc79-f89b4647e69c-kube-api-access-nbzfg\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0" Feb 20 15:15:42.935037 master-0 kubenswrapper[28120]: I0220 15:15:42.934916 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-627b02da-0050-44fc-99dd-b9edf35d00da\" (UniqueName: \"kubernetes.io/csi/topolvm.io^18a838cb-4f88-4235-b3f2-83b8e4f40191\") pod \"rabbitmq-server-0\" (UID: \"86fa1b6a-d104-4787-b6d5-21dfd3a324f8\") " pod="openstack/rabbitmq-server-0" Feb 20 15:15:43.138189 master-0 kubenswrapper[28120]: I0220 15:15:43.138131 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 20 15:15:44.114511 master-0 kubenswrapper[28120]: I0220 15:15:44.114372 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 20 15:15:44.119934 master-0 kubenswrapper[28120]: I0220 15:15:44.119854 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.123370 master-0 kubenswrapper[28120]: I0220 15:15:44.122257 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 20 15:15:44.123370 master-0 kubenswrapper[28120]: I0220 15:15:44.122450 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 20 15:15:44.123757 master-0 kubenswrapper[28120]: I0220 15:15:44.123693 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 20 15:15:44.125731 master-0 kubenswrapper[28120]: I0220 15:15:44.125680 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 20 15:15:44.178784 master-0 kubenswrapper[28120]: I0220 15:15:44.178687 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53073f3-712a-4f4f-9d29-18e123a19303-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.179320 master-0 kubenswrapper[28120]: I0220 15:15:44.178795 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfhx7\" (UniqueName: \"kubernetes.io/projected/f53073f3-712a-4f4f-9d29-18e123a19303-kube-api-access-nfhx7\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.179320 master-0 kubenswrapper[28120]: I0220 15:15:44.178842 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f53073f3-712a-4f4f-9d29-18e123a19303-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.179320 master-0 kubenswrapper[28120]: I0220 15:15:44.179029 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f53073f3-712a-4f4f-9d29-18e123a19303-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.179320 master-0 kubenswrapper[28120]: I0220 15:15:44.179068 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f53073f3-712a-4f4f-9d29-18e123a19303-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.179320 master-0 kubenswrapper[28120]: I0220 15:15:44.179247 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-11e555f5-c004-4cc8-ac4d-68dba64bcffb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e78be351-3ae3-4ff7-99d1-fc6791ccd5b9\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.179533 master-0 kubenswrapper[28120]: I0220 15:15:44.179450 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f53073f3-712a-4f4f-9d29-18e123a19303-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.179533 master-0 kubenswrapper[28120]: I0220 15:15:44.179508 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f53073f3-712a-4f4f-9d29-18e123a19303-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.254457 master-0 kubenswrapper[28120]: I0220 15:15:44.254404 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c5eff32d-b5d9-463e-afe4-4203e2b8d464\" (UniqueName: \"kubernetes.io/csi/topolvm.io^fa523174-471c-462e-8a76-c6e65d3dc011\") pod \"rabbitmq-cell1-server-0\" (UID: \"041ad820-183f-41ee-b690-b1687d55e12e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 20 15:15:44.281694 master-0 kubenswrapper[28120]: I0220 15:15:44.281622 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f53073f3-712a-4f4f-9d29-18e123a19303-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.281694 master-0 kubenswrapper[28120]: I0220 15:15:44.281680 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f53073f3-712a-4f4f-9d29-18e123a19303-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.281951 master-0 kubenswrapper[28120]: I0220 15:15:44.281756 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-11e555f5-c004-4cc8-ac4d-68dba64bcffb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e78be351-3ae3-4ff7-99d1-fc6791ccd5b9\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.281951 master-0 kubenswrapper[28120]: I0220 15:15:44.281778 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/f53073f3-712a-4f4f-9d29-18e123a19303-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.281951 master-0 kubenswrapper[28120]: I0220 15:15:44.281795 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f53073f3-712a-4f4f-9d29-18e123a19303-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.281951 master-0 kubenswrapper[28120]: I0220 15:15:44.281824 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53073f3-712a-4f4f-9d29-18e123a19303-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.281951 master-0 kubenswrapper[28120]: I0220 15:15:44.281842 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfhx7\" (UniqueName: \"kubernetes.io/projected/f53073f3-712a-4f4f-9d29-18e123a19303-kube-api-access-nfhx7\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.281951 master-0 kubenswrapper[28120]: I0220 15:15:44.281864 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f53073f3-712a-4f4f-9d29-18e123a19303-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.282420 master-0 kubenswrapper[28120]: I0220 15:15:44.282387 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f53073f3-712a-4f4f-9d29-18e123a19303-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.283133 master-0 kubenswrapper[28120]: I0220 15:15:44.283091 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f53073f3-712a-4f4f-9d29-18e123a19303-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.283273 master-0 kubenswrapper[28120]: I0220 15:15:44.283244 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f53073f3-712a-4f4f-9d29-18e123a19303-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.283795 master-0 kubenswrapper[28120]: I0220 15:15:44.283763 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f53073f3-712a-4f4f-9d29-18e123a19303-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.285696 master-0 kubenswrapper[28120]: I0220 15:15:44.285665 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f53073f3-712a-4f4f-9d29-18e123a19303-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.286347 master-0 kubenswrapper[28120]: I0220 15:15:44.286322 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 20 15:15:44.286397 master-0 kubenswrapper[28120]: I0220 15:15:44.286356 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-11e555f5-c004-4cc8-ac4d-68dba64bcffb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e78be351-3ae3-4ff7-99d1-fc6791ccd5b9\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e42de0ce67af61b5e2bcc3055d6d1af93e0eaa70b7986795b4f44b8fd6217985/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.290456 master-0 kubenswrapper[28120]: I0220 15:15:44.289735 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53073f3-712a-4f4f-9d29-18e123a19303-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.305352 master-0 kubenswrapper[28120]: I0220 15:15:44.301880 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfhx7\" (UniqueName: \"kubernetes.io/projected/f53073f3-712a-4f4f-9d29-18e123a19303-kube-api-access-nfhx7\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:44.423006 master-0 kubenswrapper[28120]: I0220 15:15:44.422854 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 20 15:15:45.327961 master-0 kubenswrapper[28120]: I0220 15:15:45.324475 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dd59958b-044b-4d1c-a8ca-8b787d6fdf77\" (UniqueName: \"kubernetes.io/csi/topolvm.io^72933879-4523-4473-a289-96ff8a8e008e\") pod \"openstack-galera-0\" (UID: \"59d78ff4-b331-48f2-bc79-f89b4647e69c\") " pod="openstack/openstack-galera-0" Feb 20 15:15:45.392338 master-0 kubenswrapper[28120]: I0220 15:15:45.391900 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 20 15:15:46.277126 master-0 kubenswrapper[28120]: I0220 15:15:46.277030 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mdjwf"] Feb 20 15:15:46.278797 master-0 kubenswrapper[28120]: I0220 15:15:46.278762 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mdjwf" Feb 20 15:15:46.281031 master-0 kubenswrapper[28120]: I0220 15:15:46.280973 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 20 15:15:46.281540 master-0 kubenswrapper[28120]: I0220 15:15:46.281497 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 20 15:15:46.331934 master-0 kubenswrapper[28120]: I0220 15:15:46.330835 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5lchw"] Feb 20 15:15:46.332917 master-0 kubenswrapper[28120]: I0220 15:15:46.332858 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-5lchw" Feb 20 15:15:46.345849 master-0 kubenswrapper[28120]: I0220 15:15:46.345582 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mdjwf"] Feb 20 15:15:46.386128 master-0 kubenswrapper[28120]: I0220 15:15:46.386061 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5lchw"] Feb 20 15:15:46.407685 master-0 kubenswrapper[28120]: I0220 15:15:46.407622 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-11e555f5-c004-4cc8-ac4d-68dba64bcffb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e78be351-3ae3-4ff7-99d1-fc6791ccd5b9\") pod \"openstack-cell1-galera-0\" (UID: \"f53073f3-712a-4f4f-9d29-18e123a19303\") " pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:46.438325 master-0 kubenswrapper[28120]: I0220 15:15:46.438258 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a840047-42da-4d9c-81e2-8a4da0c3997f-scripts\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf" Feb 20 15:15:46.438325 master-0 kubenswrapper[28120]: I0220 15:15:46.438329 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/08ee872d-0365-4096-bc4e-387262807443-var-log\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw" Feb 20 15:15:46.438649 master-0 kubenswrapper[28120]: I0220 15:15:46.438369 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/08ee872d-0365-4096-bc4e-387262807443-etc-ovs\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " 
pod="openstack/ovn-controller-ovs-5lchw" Feb 20 15:15:46.438649 master-0 kubenswrapper[28120]: I0220 15:15:46.438395 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4a840047-42da-4d9c-81e2-8a4da0c3997f-var-log-ovn\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf" Feb 20 15:15:46.438649 master-0 kubenswrapper[28120]: I0220 15:15:46.438427 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx4wq\" (UniqueName: \"kubernetes.io/projected/08ee872d-0365-4096-bc4e-387262807443-kube-api-access-sx4wq\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw" Feb 20 15:15:46.438649 master-0 kubenswrapper[28120]: I0220 15:15:46.438607 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28kqc\" (UniqueName: \"kubernetes.io/projected/4a840047-42da-4d9c-81e2-8a4da0c3997f-kube-api-access-28kqc\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf" Feb 20 15:15:46.438823 master-0 kubenswrapper[28120]: I0220 15:15:46.438689 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a840047-42da-4d9c-81e2-8a4da0c3997f-combined-ca-bundle\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf" Feb 20 15:15:46.438823 master-0 kubenswrapper[28120]: I0220 15:15:46.438752 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4a840047-42da-4d9c-81e2-8a4da0c3997f-ovn-controller-tls-certs\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf" Feb 20 15:15:46.438823 master-0 kubenswrapper[28120]: I0220 15:15:46.438794 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a840047-42da-4d9c-81e2-8a4da0c3997f-var-run-ovn\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf" Feb 20 15:15:46.438987 master-0 kubenswrapper[28120]: I0220 15:15:46.438836 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4a840047-42da-4d9c-81e2-8a4da0c3997f-var-run\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf" Feb 20 15:15:46.438987 master-0 kubenswrapper[28120]: I0220 15:15:46.438897 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08ee872d-0365-4096-bc4e-387262807443-scripts\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw" Feb 20 15:15:46.438987 master-0 kubenswrapper[28120]: I0220 15:15:46.438944 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08ee872d-0365-4096-bc4e-387262807443-var-run\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw" Feb 20 15:15:46.439126 master-0 kubenswrapper[28120]: I0220 15:15:46.439078 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/08ee872d-0365-4096-bc4e-387262807443-var-lib\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw" Feb 20 15:15:46.544616 master-0 kubenswrapper[28120]: I0220 15:15:46.544453 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 20 15:15:46.545240 master-0 kubenswrapper[28120]: I0220 15:15:46.545190 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28kqc\" (UniqueName: \"kubernetes.io/projected/4a840047-42da-4d9c-81e2-8a4da0c3997f-kube-api-access-28kqc\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf" Feb 20 15:15:46.545335 master-0 kubenswrapper[28120]: I0220 15:15:46.545267 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a840047-42da-4d9c-81e2-8a4da0c3997f-combined-ca-bundle\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf" Feb 20 15:15:46.545335 master-0 kubenswrapper[28120]: I0220 15:15:46.545313 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a840047-42da-4d9c-81e2-8a4da0c3997f-ovn-controller-tls-certs\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf" Feb 20 15:15:46.545455 master-0 kubenswrapper[28120]: I0220 15:15:46.545351 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a840047-42da-4d9c-81e2-8a4da0c3997f-var-run-ovn\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf" Feb 20 15:15:46.545455 
master-0 kubenswrapper[28120]: I0220 15:15:46.545411 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4a840047-42da-4d9c-81e2-8a4da0c3997f-var-run\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf"
Feb 20 15:15:46.545455 master-0 kubenswrapper[28120]: I0220 15:15:46.545450 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08ee872d-0365-4096-bc4e-387262807443-scripts\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:46.546125 master-0 kubenswrapper[28120]: I0220 15:15:46.545477 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08ee872d-0365-4096-bc4e-387262807443-var-run\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:46.546125 master-0 kubenswrapper[28120]: I0220 15:15:46.545532 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/08ee872d-0365-4096-bc4e-387262807443-var-lib\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:46.546125 master-0 kubenswrapper[28120]: I0220 15:15:46.545592 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a840047-42da-4d9c-81e2-8a4da0c3997f-scripts\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf"
Feb 20 15:15:46.546125 master-0 kubenswrapper[28120]: I0220 15:15:46.545614 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/08ee872d-0365-4096-bc4e-387262807443-var-log\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:46.546125 master-0 kubenswrapper[28120]: I0220 15:15:46.545640 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/08ee872d-0365-4096-bc4e-387262807443-etc-ovs\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:46.546125 master-0 kubenswrapper[28120]: I0220 15:15:46.545664 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4a840047-42da-4d9c-81e2-8a4da0c3997f-var-log-ovn\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf"
Feb 20 15:15:46.546125 master-0 kubenswrapper[28120]: I0220 15:15:46.545693 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx4wq\" (UniqueName: \"kubernetes.io/projected/08ee872d-0365-4096-bc4e-387262807443-kube-api-access-sx4wq\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:46.547146 master-0 kubenswrapper[28120]: I0220 15:15:46.546703 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08ee872d-0365-4096-bc4e-387262807443-var-run\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:46.547146 master-0 kubenswrapper[28120]: I0220 15:15:46.546783 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/08ee872d-0365-4096-bc4e-387262807443-var-log\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:46.547146 master-0 kubenswrapper[28120]: I0220 15:15:46.546873 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/08ee872d-0365-4096-bc4e-387262807443-etc-ovs\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:46.547146 master-0 kubenswrapper[28120]: I0220 15:15:46.547044 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/08ee872d-0365-4096-bc4e-387262807443-var-lib\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:46.547146 master-0 kubenswrapper[28120]: I0220 15:15:46.547072 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4a840047-42da-4d9c-81e2-8a4da0c3997f-var-log-ovn\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf"
Feb 20 15:15:46.549245 master-0 kubenswrapper[28120]: I0220 15:15:46.548173 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08ee872d-0365-4096-bc4e-387262807443-scripts\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:46.549245 master-0 kubenswrapper[28120]: I0220 15:15:46.548315 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4a840047-42da-4d9c-81e2-8a4da0c3997f-var-run-ovn\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf"
Feb 20 15:15:46.549245 master-0 kubenswrapper[28120]: I0220 15:15:46.548362 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4a840047-42da-4d9c-81e2-8a4da0c3997f-var-run\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf"
Feb 20 15:15:46.550240 master-0 kubenswrapper[28120]: I0220 15:15:46.550181 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a840047-42da-4d9c-81e2-8a4da0c3997f-scripts\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf"
Feb 20 15:15:46.562745 master-0 kubenswrapper[28120]: I0220 15:15:46.562681 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a840047-42da-4d9c-81e2-8a4da0c3997f-combined-ca-bundle\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf"
Feb 20 15:15:46.569523 master-0 kubenswrapper[28120]: I0220 15:15:46.569485 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx4wq\" (UniqueName: \"kubernetes.io/projected/08ee872d-0365-4096-bc4e-387262807443-kube-api-access-sx4wq\") pod \"ovn-controller-ovs-5lchw\" (UID: \"08ee872d-0365-4096-bc4e-387262807443\") " pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:46.578779 master-0 kubenswrapper[28120]: I0220 15:15:46.578720 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a840047-42da-4d9c-81e2-8a4da0c3997f-ovn-controller-tls-certs\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf"
Feb 20 15:15:46.584899 master-0 kubenswrapper[28120]: I0220 15:15:46.584832 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28kqc\" (UniqueName: \"kubernetes.io/projected/4a840047-42da-4d9c-81e2-8a4da0c3997f-kube-api-access-28kqc\") pod \"ovn-controller-mdjwf\" (UID: \"4a840047-42da-4d9c-81e2-8a4da0c3997f\") " pod="openstack/ovn-controller-mdjwf"
Feb 20 15:15:46.634947 master-0 kubenswrapper[28120]: I0220 15:15:46.631428 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mdjwf"
Feb 20 15:15:46.705908 master-0 kubenswrapper[28120]: I0220 15:15:46.705836 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:15:51.729945 master-0 kubenswrapper[28120]: I0220 15:15:51.729692 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 20 15:15:51.734936 master-0 kubenswrapper[28120]: I0220 15:15:51.731436 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.734936 master-0 kubenswrapper[28120]: I0220 15:15:51.733810 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 20 15:15:51.738974 master-0 kubenswrapper[28120]: I0220 15:15:51.736657 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 20 15:15:51.738974 master-0 kubenswrapper[28120]: I0220 15:15:51.737032 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 20 15:15:51.738974 master-0 kubenswrapper[28120]: I0220 15:15:51.737505 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 20 15:15:51.776539 master-0 kubenswrapper[28120]: I0220 15:15:51.776453 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 20 15:15:51.873034 master-0 kubenswrapper[28120]: I0220 15:15:51.872877 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zqwb\" (UniqueName: \"kubernetes.io/projected/c1bd517d-8804-484c-a8b8-bc08705b9479-kube-api-access-7zqwb\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.873034 master-0 kubenswrapper[28120]: I0220 15:15:51.872948 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-752aab34-b75b-4297-9016-7afeb9fecd0f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^95468e0f-dad4-4bc8-bd3a-9aa8c6875915\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.873034 master-0 kubenswrapper[28120]: I0220 15:15:51.872972 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1bd517d-8804-484c-a8b8-bc08705b9479-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.873034 master-0 kubenswrapper[28120]: I0220 15:15:51.873002 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1bd517d-8804-484c-a8b8-bc08705b9479-config\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.873597 master-0 kubenswrapper[28120]: I0220 15:15:51.873525 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1bd517d-8804-484c-a8b8-bc08705b9479-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.873597 master-0 kubenswrapper[28120]: I0220 15:15:51.873569 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1bd517d-8804-484c-a8b8-bc08705b9479-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.873837 master-0 kubenswrapper[28120]: I0220 15:15:51.873788 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1bd517d-8804-484c-a8b8-bc08705b9479-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.874083 master-0 kubenswrapper[28120]: I0220 15:15:51.874018 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1bd517d-8804-484c-a8b8-bc08705b9479-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.934079 master-0 kubenswrapper[28120]: I0220 15:15:51.934019 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 20 15:15:51.936135 master-0 kubenswrapper[28120]: I0220 15:15:51.936097 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:51.968565 master-0 kubenswrapper[28120]: I0220 15:15:51.968509 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 20 15:15:51.970545 master-0 kubenswrapper[28120]: I0220 15:15:51.970515 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 20 15:15:51.971280 master-0 kubenswrapper[28120]: I0220 15:15:51.970688 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 20 15:15:51.971280 master-0 kubenswrapper[28120]: I0220 15:15:51.970753 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 20 15:15:51.976434 master-0 kubenswrapper[28120]: I0220 15:15:51.976061 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1bd517d-8804-484c-a8b8-bc08705b9479-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.976434 master-0 kubenswrapper[28120]: I0220 15:15:51.976137 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1bd517d-8804-484c-a8b8-bc08705b9479-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.976434 master-0 kubenswrapper[28120]: I0220 15:15:51.976202 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1bd517d-8804-484c-a8b8-bc08705b9479-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.976434 master-0 kubenswrapper[28120]: I0220 15:15:51.976234 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1bd517d-8804-484c-a8b8-bc08705b9479-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.976434 master-0 kubenswrapper[28120]: I0220 15:15:51.976354 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zqwb\" (UniqueName: \"kubernetes.io/projected/c1bd517d-8804-484c-a8b8-bc08705b9479-kube-api-access-7zqwb\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.976434 master-0 kubenswrapper[28120]: I0220 15:15:51.976393 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-752aab34-b75b-4297-9016-7afeb9fecd0f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^95468e0f-dad4-4bc8-bd3a-9aa8c6875915\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.976434 master-0 kubenswrapper[28120]: I0220 15:15:51.976418 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1bd517d-8804-484c-a8b8-bc08705b9479-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.976850 master-0 kubenswrapper[28120]: I0220 15:15:51.976453 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1bd517d-8804-484c-a8b8-bc08705b9479-config\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.977544 master-0 kubenswrapper[28120]: I0220 15:15:51.977517 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1bd517d-8804-484c-a8b8-bc08705b9479-config\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.978423 master-0 kubenswrapper[28120]: I0220 15:15:51.978406 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1bd517d-8804-484c-a8b8-bc08705b9479-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.979945 master-0 kubenswrapper[28120]: I0220 15:15:51.979879 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1bd517d-8804-484c-a8b8-bc08705b9479-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.980532 master-0 kubenswrapper[28120]: I0220 15:15:51.980433 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 20 15:15:51.980532 master-0 kubenswrapper[28120]: I0220 15:15:51.980470 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-752aab34-b75b-4297-9016-7afeb9fecd0f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^95468e0f-dad4-4bc8-bd3a-9aa8c6875915\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/af377acafee8540f4e344d8a43dbc63223f81ab73e54e099ed1f618448b921f3/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.983615 master-0 kubenswrapper[28120]: I0220 15:15:51.983578 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1bd517d-8804-484c-a8b8-bc08705b9479-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.986496 master-0 kubenswrapper[28120]: I0220 15:15:51.985395 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1bd517d-8804-484c-a8b8-bc08705b9479-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.995076 master-0 kubenswrapper[28120]: I0220 15:15:51.994810 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1bd517d-8804-484c-a8b8-bc08705b9479-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:51.998840 master-0 kubenswrapper[28120]: I0220 15:15:51.998798 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zqwb\" (UniqueName: \"kubernetes.io/projected/c1bd517d-8804-484c-a8b8-bc08705b9479-kube-api-access-7zqwb\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:52.078242 master-0 kubenswrapper[28120]: I0220 15:15:52.078158 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6cc641-5305-4446-bcaf-7a5ebe004556-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.078470 master-0 kubenswrapper[28120]: I0220 15:15:52.078264 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c6cc641-5305-4446-bcaf-7a5ebe004556-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.078470 master-0 kubenswrapper[28120]: I0220 15:15:52.078305 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6cc641-5305-4446-bcaf-7a5ebe004556-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.078470 master-0 kubenswrapper[28120]: I0220 15:15:52.078368 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdz8t\" (UniqueName: \"kubernetes.io/projected/0c6cc641-5305-4446-bcaf-7a5ebe004556-kube-api-access-wdz8t\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.078470 master-0 kubenswrapper[28120]: I0220 15:15:52.078422 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b433270a-b358-413d-b89b-8c75e9250d55\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e638f11d-a674-40c1-99d6-1ea9e1681697\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.078470 master-0 kubenswrapper[28120]: I0220 15:15:52.078473 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0c6cc641-5305-4446-bcaf-7a5ebe004556-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.078702 master-0 kubenswrapper[28120]: I0220 15:15:52.078523 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6cc641-5305-4446-bcaf-7a5ebe004556-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.078702 master-0 kubenswrapper[28120]: I0220 15:15:52.078542 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c6cc641-5305-4446-bcaf-7a5ebe004556-config\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.179869 master-0 kubenswrapper[28120]: I0220 15:15:52.179820 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdz8t\" (UniqueName: \"kubernetes.io/projected/0c6cc641-5305-4446-bcaf-7a5ebe004556-kube-api-access-wdz8t\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.180103 master-0 kubenswrapper[28120]: I0220 15:15:52.179895 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b433270a-b358-413d-b89b-8c75e9250d55\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e638f11d-a674-40c1-99d6-1ea9e1681697\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.180103 master-0 kubenswrapper[28120]: I0220 15:15:52.179977 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0c6cc641-5305-4446-bcaf-7a5ebe004556-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.180103 master-0 kubenswrapper[28120]: I0220 15:15:52.180022 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6cc641-5305-4446-bcaf-7a5ebe004556-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.180103 master-0 kubenswrapper[28120]: I0220 15:15:52.180038 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c6cc641-5305-4446-bcaf-7a5ebe004556-config\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.180103 master-0 kubenswrapper[28120]: I0220 15:15:52.180091 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6cc641-5305-4446-bcaf-7a5ebe004556-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.180263 master-0 kubenswrapper[28120]: I0220 15:15:52.180123 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c6cc641-5305-4446-bcaf-7a5ebe004556-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.181477 master-0 kubenswrapper[28120]: I0220 15:15:52.181093 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6cc641-5305-4446-bcaf-7a5ebe004556-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.181477 master-0 kubenswrapper[28120]: I0220 15:15:52.181201 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 20 15:15:52.181477 master-0 kubenswrapper[28120]: I0220 15:15:52.181230 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b433270a-b358-413d-b89b-8c75e9250d55\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e638f11d-a674-40c1-99d6-1ea9e1681697\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/148664c0f24fa2aa3f22d28668f00ab75d39ed65b3e6f49f36b1d00c5c8cd933/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.181477 master-0 kubenswrapper[28120]: I0220 15:15:52.181329 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0c6cc641-5305-4446-bcaf-7a5ebe004556-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.181849 master-0 kubenswrapper[28120]: I0220 15:15:52.181787 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c6cc641-5305-4446-bcaf-7a5ebe004556-config\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.182966 master-0 kubenswrapper[28120]: I0220 15:15:52.182946 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c6cc641-5305-4446-bcaf-7a5ebe004556-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.183413 master-0 kubenswrapper[28120]: I0220 15:15:52.183387 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6cc641-5305-4446-bcaf-7a5ebe004556-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.184135 master-0 kubenswrapper[28120]: I0220 15:15:52.184110 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6cc641-5305-4446-bcaf-7a5ebe004556-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.186797 master-0 kubenswrapper[28120]: I0220 15:15:52.186764 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6cc641-5305-4446-bcaf-7a5ebe004556-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:52.202773 master-0 kubenswrapper[28120]: I0220 15:15:52.202730 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdz8t\" (UniqueName: \"kubernetes.io/projected/0c6cc641-5305-4446-bcaf-7a5ebe004556-kube-api-access-wdz8t\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:53.302088 master-0 kubenswrapper[28120]: I0220 15:15:53.297499 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 20 15:15:53.628084 master-0 kubenswrapper[28120]: I0220 15:15:53.624273 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-752aab34-b75b-4297-9016-7afeb9fecd0f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^95468e0f-dad4-4bc8-bd3a-9aa8c6875915\") pod \"ovsdbserver-nb-0\" (UID: \"c1bd517d-8804-484c-a8b8-bc08705b9479\") " pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:53.865932 master-0 kubenswrapper[28120]: I0220 15:15:53.865849 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 20 15:15:54.053288 master-0 kubenswrapper[28120]: W0220 15:15:54.053185 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6ef9865_9c4d_489b_a84b_7e444820e473.slice/crio-74eec7f49965209afaf3b3899987bf3a2dc8e77de10f483f24e41e938456f63a WatchSource:0}: Error finding container 74eec7f49965209afaf3b3899987bf3a2dc8e77de10f483f24e41e938456f63a: Status 404 returned error can't find the container with id 74eec7f49965209afaf3b3899987bf3a2dc8e77de10f483f24e41e938456f63a
Feb 20 15:15:54.067687 master-0 kubenswrapper[28120]: I0220 15:15:54.067029 28120 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 20 15:15:54.537807 master-0 kubenswrapper[28120]: I0220 15:15:54.537723 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d6ef9865-9c4d-489b-a84b-7e444820e473","Type":"ContainerStarted","Data":"74eec7f49965209afaf3b3899987bf3a2dc8e77de10f483f24e41e938456f63a"}
Feb 20 15:15:54.634871 master-0 kubenswrapper[28120]: I0220 15:15:54.634188 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 20 15:15:54.833461 master-0 kubenswrapper[28120]: W0220 15:15:54.833326 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod041ad820_183f_41ee_b690_b1687d55e12e.slice/crio-d2f098ccb4c528a98c38d2809562091b37b452a285c2e86999d7642a3759deed WatchSource:0}: Error finding container d2f098ccb4c528a98c38d2809562091b37b452a285c2e86999d7642a3759deed: Status 404 returned error can't find the container with id d2f098ccb4c528a98c38d2809562091b37b452a285c2e86999d7642a3759deed
Feb 20 15:15:54.977890 master-0 kubenswrapper[28120]: I0220 15:15:54.976310 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b433270a-b358-413d-b89b-8c75e9250d55\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e638f11d-a674-40c1-99d6-1ea9e1681697\") pod \"ovsdbserver-sb-0\" (UID: \"0c6cc641-5305-4446-bcaf-7a5ebe004556\") " pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:55.065317 master-0 kubenswrapper[28120]: I0220 15:15:55.065248 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 20 15:15:55.235640 master-0 kubenswrapper[28120]: I0220 15:15:55.235268 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mdjwf"]
Feb 20 15:15:55.246841 master-0 kubenswrapper[28120]: I0220 15:15:55.246768 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 20 15:15:55.256628 master-0 kubenswrapper[28120]: I0220 15:15:55.256544 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 20 15:15:55.264253 master-0 kubenswrapper[28120]: I0220 15:15:55.264206 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 20 15:15:55.317016 master-0 kubenswrapper[28120]: W0220 15:15:55.315854 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59d78ff4_b331_48f2_bc79_f89b4647e69c.slice/crio-b36de329eaab0a11c5235b153c97b498e7f2fd52ed22ae3d58ead47c17f4001b WatchSource:0}: Error finding container b36de329eaab0a11c5235b153c97b498e7f2fd52ed22ae3d58ead47c17f4001b: Status 404 returned error can't find the container with id b36de329eaab0a11c5235b153c97b498e7f2fd52ed22ae3d58ead47c17f4001b
Feb 20 15:15:55.548359 master-0 kubenswrapper[28120]: I0220 15:15:55.548306 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f53073f3-712a-4f4f-9d29-18e123a19303","Type":"ContainerStarted","Data":"cf092117d6c79fbee9236742ed344674028b9cebf61cc76bf47e7c2760ca57c3"}
Feb 20 15:15:55.549677 master-0 kubenswrapper[28120]: I0220 15:15:55.549622 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mdjwf" event={"ID":"4a840047-42da-4d9c-81e2-8a4da0c3997f","Type":"ContainerStarted","Data":"a5ee4a441d04c7f93f6f45ae896169a0bcf0f4ed898c361c115cae650f27b304"}
Feb 20 15:15:55.551666 master-0 kubenswrapper[28120]: I0220 15:15:55.551628 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"59d78ff4-b331-48f2-bc79-f89b4647e69c","Type":"ContainerStarted","Data":"b36de329eaab0a11c5235b153c97b498e7f2fd52ed22ae3d58ead47c17f4001b"}
Feb 20 15:15:55.553138 master-0 kubenswrapper[28120]: I0220 15:15:55.553106 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86fa1b6a-d104-4787-b6d5-21dfd3a324f8","Type":"ContainerStarted","Data":"5b11cb020c963688547caf138d9e85a13240cc1587ab4e22ce7fbe8767dffc8d"}
Feb 20 15:15:55.554323 master-0 kubenswrapper[28120]: I0220 15:15:55.554293 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"041ad820-183f-41ee-b690-b1687d55e12e","Type":"ContainerStarted","Data":"d2f098ccb4c528a98c38d2809562091b37b452a285c2e86999d7642a3759deed"}
Feb 20 15:15:55.789415 master-0 kubenswrapper[28120]: I0220 15:15:55.789374 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5lchw"]
Feb 20 15:15:55.881484 master-0 kubenswrapper[28120]: W0220 15:15:55.881414 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08ee872d_0365_4096_bc4e_387262807443.slice/crio-ba188e2e8880005f6fb791504a034788a2f75d6915f17848262ef565c5070404 WatchSource:0}: Error finding container ba188e2e8880005f6fb791504a034788a2f75d6915f17848262ef565c5070404: Status 404 returned error can't find the container with id ba188e2e8880005f6fb791504a034788a2f75d6915f17848262ef565c5070404
Feb 20 15:15:56.545467 master-0 kubenswrapper[28120]: I0220 15:15:56.545213 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 20 15:15:56.576692 master-0 kubenswrapper[28120]: I0220 15:15:56.576590 28120 generic.go:334] "Generic (PLEG): container finished" podID="f516ada9-a716-4f69-9f2b-130599c34ebe" containerID="1539cfa3adcc5bd6e066deb607a53af688e87411a2d794669ea7f690a0f23849" exitCode=0
Feb 20 15:15:56.578427 master-0 kubenswrapper[28120]: I0220 15:15:56.576881 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-dn4fc" event={"ID":"f516ada9-a716-4f69-9f2b-130599c34ebe","Type":"ContainerDied","Data":"1539cfa3adcc5bd6e066deb607a53af688e87411a2d794669ea7f690a0f23849"}
Feb 20 15:15:56.582913 master-0 kubenswrapper[28120]: I0220 15:15:56.582723 28120 generic.go:334] "Generic (PLEG): container finished" podID="8742b7a0-5156-4b20-b683-feb84b330828" containerID="595b6c9cfd35e711d528aa481fd547dd6157cc02b9449fbccd3c8c7535dafd76" exitCode=0
Feb 20 15:15:56.582913 master-0 kubenswrapper[28120]: I0220 15:15:56.582811 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" event={"ID":"8742b7a0-5156-4b20-b683-feb84b330828","Type":"ContainerDied","Data":"595b6c9cfd35e711d528aa481fd547dd6157cc02b9449fbccd3c8c7535dafd76"}
Feb 20 15:15:56.585074 master-0 kubenswrapper[28120]: I0220 15:15:56.584850 28120 generic.go:334] "Generic (PLEG): container finished" podID="76a10caa-8597-4073-a879-ddebee1dbd76" containerID="1e996db88b2725c767f476c20224583abd72c9708d84198c03ddcbdd9c3b3bd8" exitCode=0
Feb 20 15:15:56.585074 master-0 kubenswrapper[28120]: I0220 15:15:56.584907 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" event={"ID":"76a10caa-8597-4073-a879-ddebee1dbd76","Type":"ContainerDied","Data":"1e996db88b2725c767f476c20224583abd72c9708d84198c03ddcbdd9c3b3bd8"}
Feb 20 15:15:56.587243 master-0 kubenswrapper[28120]: I0220 15:15:56.587153 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5lchw" event={"ID":"08ee872d-0365-4096-bc4e-387262807443","Type":"ContainerStarted","Data":"ba188e2e8880005f6fb791504a034788a2f75d6915f17848262ef565c5070404"}
Feb 20 15:15:56.590334 master-0 kubenswrapper[28120]: I0220 15:15:56.590240 28120 generic.go:334] "Generic (PLEG): container finished" podID="91310aca-57ce-4308-afcc-95511e83dc27" containerID="d3e5ceaa912a7025433e8ada9af585c23e6549332d866a1b998298f085e56d2e" exitCode=0
Feb 20 15:15:56.590334 master-0 kubenswrapper[28120]: I0220 15:15:56.590276 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" event={"ID":"91310aca-57ce-4308-afcc-95511e83dc27","Type":"ContainerDied","Data":"d3e5ceaa912a7025433e8ada9af585c23e6549332d866a1b998298f085e56d2e"}
Feb 20 15:15:57.140137 master-0 kubenswrapper[28120]: I0220 15:15:57.140067 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 20 15:15:57.582457 master-0 kubenswrapper[28120]: I0220 15:15:57.582365 28120 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" Feb 20 15:15:57.608025 master-0 kubenswrapper[28120]: I0220 15:15:57.607436 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" event={"ID":"76a10caa-8597-4073-a879-ddebee1dbd76","Type":"ContainerDied","Data":"04a1ce2ec6770cd0c5c8c8f7610b8c886d556118c4dc2902472cafa8664a0bac"} Feb 20 15:15:57.608025 master-0 kubenswrapper[28120]: I0220 15:15:57.607535 28120 scope.go:117] "RemoveContainer" containerID="1e996db88b2725c767f476c20224583abd72c9708d84198c03ddcbdd9c3b3bd8" Feb 20 15:15:57.608025 master-0 kubenswrapper[28120]: I0220 15:15:57.607742 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-mm726" Feb 20 15:15:57.744411 master-0 kubenswrapper[28120]: I0220 15:15:57.744356 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4zqg\" (UniqueName: \"kubernetes.io/projected/76a10caa-8597-4073-a879-ddebee1dbd76-kube-api-access-r4zqg\") pod \"76a10caa-8597-4073-a879-ddebee1dbd76\" (UID: \"76a10caa-8597-4073-a879-ddebee1dbd76\") " Feb 20 15:15:57.744631 master-0 kubenswrapper[28120]: I0220 15:15:57.744576 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a10caa-8597-4073-a879-ddebee1dbd76-config\") pod \"76a10caa-8597-4073-a879-ddebee1dbd76\" (UID: \"76a10caa-8597-4073-a879-ddebee1dbd76\") " Feb 20 15:15:57.750539 master-0 kubenswrapper[28120]: I0220 15:15:57.750418 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a10caa-8597-4073-a879-ddebee1dbd76-kube-api-access-r4zqg" (OuterVolumeSpecName: "kube-api-access-r4zqg") pod "76a10caa-8597-4073-a879-ddebee1dbd76" (UID: "76a10caa-8597-4073-a879-ddebee1dbd76"). InnerVolumeSpecName "kube-api-access-r4zqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:15:57.779763 master-0 kubenswrapper[28120]: I0220 15:15:57.777768 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a10caa-8597-4073-a879-ddebee1dbd76-config" (OuterVolumeSpecName: "config") pod "76a10caa-8597-4073-a879-ddebee1dbd76" (UID: "76a10caa-8597-4073-a879-ddebee1dbd76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:15:57.846911 master-0 kubenswrapper[28120]: I0220 15:15:57.846851 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4zqg\" (UniqueName: \"kubernetes.io/projected/76a10caa-8597-4073-a879-ddebee1dbd76-kube-api-access-r4zqg\") on node \"master-0\" DevicePath \"\"" Feb 20 15:15:57.846911 master-0 kubenswrapper[28120]: I0220 15:15:57.846904 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a10caa-8597-4073-a879-ddebee1dbd76-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:15:57.975510 master-0 kubenswrapper[28120]: I0220 15:15:57.975451 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-mm726"] Feb 20 15:15:57.985017 master-0 kubenswrapper[28120]: I0220 15:15:57.984518 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-mm726"] Feb 20 15:15:58.068298 master-0 kubenswrapper[28120]: I0220 15:15:58.068186 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76a10caa-8597-4073-a879-ddebee1dbd76" path="/var/lib/kubelet/pods/76a10caa-8597-4073-a879-ddebee1dbd76/volumes" Feb 20 15:15:58.708279 master-0 kubenswrapper[28120]: W0220 15:15:58.708120 28120 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c6cc641_5305_4446_bcaf_7a5ebe004556.slice/crio-d382226d8be1cd46c9091d2baafa454893a4e7ada3e6381736788630b6d3199c WatchSource:0}: Error finding container d382226d8be1cd46c9091d2baafa454893a4e7ada3e6381736788630b6d3199c: Status 404 returned error can't find the container with id d382226d8be1cd46c9091d2baafa454893a4e7ada3e6381736788630b6d3199c Feb 20 15:15:58.710473 master-0 kubenswrapper[28120]: W0220 15:15:58.710400 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1bd517d_8804_484c_a8b8_bc08705b9479.slice/crio-40ac061ceb76f87386c8809427964d405d3c9a9e7bba39063f1670357c2248f0 WatchSource:0}: Error finding container 40ac061ceb76f87386c8809427964d405d3c9a9e7bba39063f1670357c2248f0: Status 404 returned error can't find the container with id 40ac061ceb76f87386c8809427964d405d3c9a9e7bba39063f1670357c2248f0 Feb 20 15:15:58.792941 master-0 kubenswrapper[28120]: I0220 15:15:58.792889 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:58.870836 master-0 kubenswrapper[28120]: I0220 15:15:58.870751 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f516ada9-a716-4f69-9f2b-130599c34ebe-dns-svc\") pod \"f516ada9-a716-4f69-9f2b-130599c34ebe\" (UID: \"f516ada9-a716-4f69-9f2b-130599c34ebe\") " Feb 20 15:15:58.871201 master-0 kubenswrapper[28120]: I0220 15:15:58.870897 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bh9j\" (UniqueName: \"kubernetes.io/projected/f516ada9-a716-4f69-9f2b-130599c34ebe-kube-api-access-5bh9j\") pod \"f516ada9-a716-4f69-9f2b-130599c34ebe\" (UID: \"f516ada9-a716-4f69-9f2b-130599c34ebe\") " Feb 20 15:15:58.871201 master-0 kubenswrapper[28120]: I0220 15:15:58.870945 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f516ada9-a716-4f69-9f2b-130599c34ebe-config\") pod \"f516ada9-a716-4f69-9f2b-130599c34ebe\" (UID: \"f516ada9-a716-4f69-9f2b-130599c34ebe\") " Feb 20 15:15:58.874024 master-0 kubenswrapper[28120]: I0220 15:15:58.873948 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f516ada9-a716-4f69-9f2b-130599c34ebe-kube-api-access-5bh9j" (OuterVolumeSpecName: "kube-api-access-5bh9j") pod "f516ada9-a716-4f69-9f2b-130599c34ebe" (UID: "f516ada9-a716-4f69-9f2b-130599c34ebe"). InnerVolumeSpecName "kube-api-access-5bh9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:15:58.895264 master-0 kubenswrapper[28120]: I0220 15:15:58.895151 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f516ada9-a716-4f69-9f2b-130599c34ebe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f516ada9-a716-4f69-9f2b-130599c34ebe" (UID: "f516ada9-a716-4f69-9f2b-130599c34ebe"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:15:58.904696 master-0 kubenswrapper[28120]: I0220 15:15:58.904609 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f516ada9-a716-4f69-9f2b-130599c34ebe-config" (OuterVolumeSpecName: "config") pod "f516ada9-a716-4f69-9f2b-130599c34ebe" (UID: "f516ada9-a716-4f69-9f2b-130599c34ebe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:15:58.975073 master-0 kubenswrapper[28120]: I0220 15:15:58.974805 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f516ada9-a716-4f69-9f2b-130599c34ebe-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:15:58.975073 master-0 kubenswrapper[28120]: I0220 15:15:58.974865 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bh9j\" (UniqueName: \"kubernetes.io/projected/f516ada9-a716-4f69-9f2b-130599c34ebe-kube-api-access-5bh9j\") on node \"master-0\" DevicePath \"\"" Feb 20 15:15:58.975073 master-0 kubenswrapper[28120]: I0220 15:15:58.974903 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f516ada9-a716-4f69-9f2b-130599c34ebe-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:15:59.641856 master-0 kubenswrapper[28120]: I0220 15:15:59.641745 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c1bd517d-8804-484c-a8b8-bc08705b9479","Type":"ContainerStarted","Data":"40ac061ceb76f87386c8809427964d405d3c9a9e7bba39063f1670357c2248f0"} Feb 20 15:15:59.644427 master-0 kubenswrapper[28120]: I0220 15:15:59.644346 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-dn4fc" 
event={"ID":"f516ada9-a716-4f69-9f2b-130599c34ebe","Type":"ContainerDied","Data":"ccf3c1369a4a10c749d3af486e114eccd9ed7818e4db6ff835b2387dd4d92ae0"} Feb 20 15:15:59.644427 master-0 kubenswrapper[28120]: I0220 15:15:59.644400 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-dn4fc" Feb 20 15:15:59.644427 master-0 kubenswrapper[28120]: I0220 15:15:59.644429 28120 scope.go:117] "RemoveContainer" containerID="1539cfa3adcc5bd6e066deb607a53af688e87411a2d794669ea7f690a0f23849" Feb 20 15:15:59.651445 master-0 kubenswrapper[28120]: I0220 15:15:59.651381 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0c6cc641-5305-4446-bcaf-7a5ebe004556","Type":"ContainerStarted","Data":"d382226d8be1cd46c9091d2baafa454893a4e7ada3e6381736788630b6d3199c"} Feb 20 15:15:59.727530 master-0 kubenswrapper[28120]: I0220 15:15:59.727430 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-dn4fc"] Feb 20 15:15:59.735076 master-0 kubenswrapper[28120]: I0220 15:15:59.734948 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-dn4fc"] Feb 20 15:16:00.068059 master-0 kubenswrapper[28120]: I0220 15:16:00.068007 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f516ada9-a716-4f69-9f2b-130599c34ebe" path="/var/lib/kubelet/pods/f516ada9-a716-4f69-9f2b-130599c34ebe/volumes" Feb 20 15:16:00.121120 master-0 kubenswrapper[28120]: I0220 15:16:00.121052 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5wrbz"] Feb 20 15:16:00.122240 master-0 kubenswrapper[28120]: E0220 15:16:00.121533 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f516ada9-a716-4f69-9f2b-130599c34ebe" containerName="init" Feb 20 15:16:00.122240 master-0 kubenswrapper[28120]: I0220 15:16:00.121546 28120 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f516ada9-a716-4f69-9f2b-130599c34ebe" containerName="init" Feb 20 15:16:00.122240 master-0 kubenswrapper[28120]: E0220 15:16:00.121581 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a10caa-8597-4073-a879-ddebee1dbd76" containerName="init" Feb 20 15:16:00.122240 master-0 kubenswrapper[28120]: I0220 15:16:00.121587 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a10caa-8597-4073-a879-ddebee1dbd76" containerName="init" Feb 20 15:16:00.122240 master-0 kubenswrapper[28120]: I0220 15:16:00.121777 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="f516ada9-a716-4f69-9f2b-130599c34ebe" containerName="init" Feb 20 15:16:00.122240 master-0 kubenswrapper[28120]: I0220 15:16:00.121794 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a10caa-8597-4073-a879-ddebee1dbd76" containerName="init" Feb 20 15:16:00.122848 master-0 kubenswrapper[28120]: I0220 15:16:00.122492 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.125189 master-0 kubenswrapper[28120]: I0220 15:16:00.125159 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 20 15:16:00.142635 master-0 kubenswrapper[28120]: I0220 15:16:00.142585 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5wrbz"] Feb 20 15:16:00.226896 master-0 kubenswrapper[28120]: I0220 15:16:00.226251 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfnjv\" (UniqueName: \"kubernetes.io/projected/5f4eec79-14a8-4bd7-9356-ba082bd01f91-kube-api-access-tfnjv\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.226896 master-0 kubenswrapper[28120]: I0220 15:16:00.226484 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f4eec79-14a8-4bd7-9356-ba082bd01f91-config\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.226896 master-0 kubenswrapper[28120]: I0220 15:16:00.226570 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5f4eec79-14a8-4bd7-9356-ba082bd01f91-ovs-rundir\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.226896 master-0 kubenswrapper[28120]: I0220 15:16:00.226609 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f4eec79-14a8-4bd7-9356-ba082bd01f91-combined-ca-bundle\") pod 
\"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.227852 master-0 kubenswrapper[28120]: I0220 15:16:00.227384 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f4eec79-14a8-4bd7-9356-ba082bd01f91-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.227909 master-0 kubenswrapper[28120]: I0220 15:16:00.227850 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5f4eec79-14a8-4bd7-9356-ba082bd01f91-ovn-rundir\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.329717 master-0 kubenswrapper[28120]: I0220 15:16:00.329579 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f4eec79-14a8-4bd7-9356-ba082bd01f91-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.329717 master-0 kubenswrapper[28120]: I0220 15:16:00.329708 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5f4eec79-14a8-4bd7-9356-ba082bd01f91-ovn-rundir\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.329943 master-0 kubenswrapper[28120]: I0220 15:16:00.329778 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfnjv\" (UniqueName: 
\"kubernetes.io/projected/5f4eec79-14a8-4bd7-9356-ba082bd01f91-kube-api-access-tfnjv\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.329943 master-0 kubenswrapper[28120]: I0220 15:16:00.329810 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f4eec79-14a8-4bd7-9356-ba082bd01f91-config\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.329943 master-0 kubenswrapper[28120]: I0220 15:16:00.329849 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5f4eec79-14a8-4bd7-9356-ba082bd01f91-ovs-rundir\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.329943 master-0 kubenswrapper[28120]: I0220 15:16:00.329877 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f4eec79-14a8-4bd7-9356-ba082bd01f91-combined-ca-bundle\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.332107 master-0 kubenswrapper[28120]: I0220 15:16:00.330786 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5f4eec79-14a8-4bd7-9356-ba082bd01f91-ovn-rundir\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.332107 master-0 kubenswrapper[28120]: I0220 15:16:00.331167 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/5f4eec79-14a8-4bd7-9356-ba082bd01f91-ovs-rundir\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.332485 master-0 kubenswrapper[28120]: I0220 15:16:00.332442 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f4eec79-14a8-4bd7-9356-ba082bd01f91-config\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.334434 master-0 kubenswrapper[28120]: I0220 15:16:00.334386 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f4eec79-14a8-4bd7-9356-ba082bd01f91-combined-ca-bundle\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.336642 master-0 kubenswrapper[28120]: I0220 15:16:00.336558 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f4eec79-14a8-4bd7-9356-ba082bd01f91-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.366873 master-0 kubenswrapper[28120]: I0220 15:16:00.366774 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfnjv\" (UniqueName: \"kubernetes.io/projected/5f4eec79-14a8-4bd7-9356-ba082bd01f91-kube-api-access-tfnjv\") pod \"ovn-controller-metrics-5wrbz\" (UID: \"5f4eec79-14a8-4bd7-9356-ba082bd01f91\") " pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.393966 master-0 kubenswrapper[28120]: I0220 15:16:00.377454 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-c5b5l"] Feb 20 
15:16:00.446951 master-0 kubenswrapper[28120]: I0220 15:16:00.445895 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-lznrf"] Feb 20 15:16:00.461951 master-0 kubenswrapper[28120]: I0220 15:16:00.447650 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" Feb 20 15:16:00.461951 master-0 kubenswrapper[28120]: I0220 15:16:00.453315 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 20 15:16:00.461951 master-0 kubenswrapper[28120]: I0220 15:16:00.457076 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-lznrf"] Feb 20 15:16:00.507131 master-0 kubenswrapper[28120]: I0220 15:16:00.507079 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5wrbz" Feb 20 15:16:00.539578 master-0 kubenswrapper[28120]: I0220 15:16:00.538688 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-lznrf\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" Feb 20 15:16:00.539578 master-0 kubenswrapper[28120]: I0220 15:16:00.538955 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-lznrf\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" Feb 20 15:16:00.539578 master-0 kubenswrapper[28120]: I0220 15:16:00.539111 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hmqf\" (UniqueName: 
\"kubernetes.io/projected/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-kube-api-access-9hmqf\") pod \"dnsmasq-dns-7c8cfc46bf-lznrf\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" Feb 20 15:16:00.539578 master-0 kubenswrapper[28120]: I0220 15:16:00.539202 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-config\") pod \"dnsmasq-dns-7c8cfc46bf-lznrf\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" Feb 20 15:16:00.583023 master-0 kubenswrapper[28120]: I0220 15:16:00.573932 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-nf55m"] Feb 20 15:16:00.622655 master-0 kubenswrapper[28120]: I0220 15:16:00.618657 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-6fsrj"] Feb 20 15:16:00.626758 master-0 kubenswrapper[28120]: I0220 15:16:00.626404 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj" Feb 20 15:16:00.628954 master-0 kubenswrapper[28120]: I0220 15:16:00.628898 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 20 15:16:00.639853 master-0 kubenswrapper[28120]: I0220 15:16:00.639808 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-6fsrj"] Feb 20 15:16:00.641037 master-0 kubenswrapper[28120]: I0220 15:16:00.640482 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-lznrf\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" Feb 20 15:16:00.641245 master-0 kubenswrapper[28120]: I0220 15:16:00.641213 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-lznrf\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" Feb 20 15:16:00.641749 master-0 kubenswrapper[28120]: I0220 15:16:00.641728 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-lznrf\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" Feb 20 15:16:00.642059 master-0 kubenswrapper[28120]: I0220 15:16:00.642037 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hmqf\" (UniqueName: \"kubernetes.io/projected/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-kube-api-access-9hmqf\") pod \"dnsmasq-dns-7c8cfc46bf-lznrf\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " 
pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf"
Feb 20 15:16:00.642272 master-0 kubenswrapper[28120]: I0220 15:16:00.642254 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-config\") pod \"dnsmasq-dns-7c8cfc46bf-lznrf\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf"
Feb 20 15:16:00.643644 master-0 kubenswrapper[28120]: I0220 15:16:00.643625 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-config\") pod \"dnsmasq-dns-7c8cfc46bf-lznrf\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf"
Feb 20 15:16:00.644903 master-0 kubenswrapper[28120]: I0220 15:16:00.644876 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-lznrf\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf"
Feb 20 15:16:00.662419 master-0 kubenswrapper[28120]: I0220 15:16:00.662386 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hmqf\" (UniqueName: \"kubernetes.io/projected/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-kube-api-access-9hmqf\") pod \"dnsmasq-dns-7c8cfc46bf-lznrf\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf"
Feb 20 15:16:00.744474 master-0 kubenswrapper[28120]: I0220 15:16:00.744397 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-config\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.744474 master-0 kubenswrapper[28120]: I0220 15:16:00.744460 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtcp7\" (UniqueName: \"kubernetes.io/projected/6d6be79c-d0e2-452f-8462-dd8022e4232d-kube-api-access-jtcp7\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.745112 master-0 kubenswrapper[28120]: I0220 15:16:00.744546 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.745112 master-0 kubenswrapper[28120]: I0220 15:16:00.744618 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.745112 master-0 kubenswrapper[28120]: I0220 15:16:00.744676 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.778741 master-0 kubenswrapper[28120]: I0220 15:16:00.778672 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf"
Feb 20 15:16:00.846962 master-0 kubenswrapper[28120]: I0220 15:16:00.846803 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.846962 master-0 kubenswrapper[28120]: I0220 15:16:00.846941 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.847258 master-0 kubenswrapper[28120]: I0220 15:16:00.847002 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.847258 master-0 kubenswrapper[28120]: I0220 15:16:00.847082 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-config\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.847258 master-0 kubenswrapper[28120]: I0220 15:16:00.847113 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtcp7\" (UniqueName: \"kubernetes.io/projected/6d6be79c-d0e2-452f-8462-dd8022e4232d-kube-api-access-jtcp7\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.848737 master-0 kubenswrapper[28120]: I0220 15:16:00.848701 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.848956 master-0 kubenswrapper[28120]: I0220 15:16:00.848884 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.849721 master-0 kubenswrapper[28120]: I0220 15:16:00.849687 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-config\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.849960 master-0 kubenswrapper[28120]: I0220 15:16:00.849902 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.862311 master-0 kubenswrapper[28120]: I0220 15:16:00.862276 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtcp7\" (UniqueName: \"kubernetes.io/projected/6d6be79c-d0e2-452f-8462-dd8022e4232d-kube-api-access-jtcp7\") pod \"dnsmasq-dns-7b9694dd79-6fsrj\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:00.953935 master-0 kubenswrapper[28120]: I0220 15:16:00.953881 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:03.045109 master-0 kubenswrapper[28120]: I0220 15:16:03.045048 28120 trace.go:236] Trace[738557151]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (20-Feb-2026 15:16:02.026) (total time: 1018ms):
Feb 20 15:16:03.045109 master-0 kubenswrapper[28120]: Trace[738557151]: [1.018026453s] [1.018026453s] END
Feb 20 15:16:04.024808 master-0 kubenswrapper[28120]: I0220 15:16:04.023681 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-6fsrj"]
Feb 20 15:16:04.073458 master-0 kubenswrapper[28120]: W0220 15:16:04.067421 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d6be79c_d0e2_452f_8462_dd8022e4232d.slice/crio-01a82a8e6c0178f58e63c1f5fb5283f31a25bc4bb9f74ccbfee20bde93d606cb WatchSource:0}: Error finding container 01a82a8e6c0178f58e63c1f5fb5283f31a25bc4bb9f74ccbfee20bde93d606cb: Status 404 returned error can't find the container with id 01a82a8e6c0178f58e63c1f5fb5283f31a25bc4bb9f74ccbfee20bde93d606cb
Feb 20 15:16:04.090377 master-0 kubenswrapper[28120]: I0220 15:16:04.082661 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5wrbz"]
Feb 20 15:16:04.167245 master-0 kubenswrapper[28120]: W0220 15:16:04.167205 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f4eec79_14a8_4bd7_9356_ba082bd01f91.slice/crio-c154920a4db5e4ce205707e6040da67dc8433f7be6a6cb74ec22c5c952d45591 WatchSource:0}: Error finding container c154920a4db5e4ce205707e6040da67dc8433f7be6a6cb74ec22c5c952d45591: Status 404 returned error can't find the container with id c154920a4db5e4ce205707e6040da67dc8433f7be6a6cb74ec22c5c952d45591
Feb 20 15:16:04.215786 master-0 kubenswrapper[28120]: I0220 15:16:04.215729 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-lznrf"]
Feb 20 15:16:04.251292 master-0 kubenswrapper[28120]: W0220 15:16:04.251187 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8e6f35c_0dcb_4fc0_a236_7245736b3ae7.slice/crio-6936dac0b6be8d2ea3967369ac557fab2956a4ae36f3c348fa874c8ba443546e WatchSource:0}: Error finding container 6936dac0b6be8d2ea3967369ac557fab2956a4ae36f3c348fa874c8ba443546e: Status 404 returned error can't find the container with id 6936dac0b6be8d2ea3967369ac557fab2956a4ae36f3c348fa874c8ba443546e
Feb 20 15:16:04.717456 master-0 kubenswrapper[28120]: I0220 15:16:04.717398 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" event={"ID":"8742b7a0-5156-4b20-b683-feb84b330828","Type":"ContainerStarted","Data":"0fb4806d1b043d5aaf8d4c25cf1ccbbf03789ab393117a8cbb2a6cc598d90370"}
Feb 20 15:16:04.717948 master-0 kubenswrapper[28120]: I0220 15:16:04.717873 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m"
Feb 20 15:16:04.718249 master-0 kubenswrapper[28120]: I0220 15:16:04.717521 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" podUID="8742b7a0-5156-4b20-b683-feb84b330828" containerName="dnsmasq-dns" containerID="cri-o://0fb4806d1b043d5aaf8d4c25cf1ccbbf03789ab393117a8cbb2a6cc598d90370" gracePeriod=10
Feb 20 15:16:04.720260 master-0 kubenswrapper[28120]: I0220 15:16:04.720033 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" event={"ID":"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7","Type":"ContainerStarted","Data":"6936dac0b6be8d2ea3967369ac557fab2956a4ae36f3c348fa874c8ba443546e"}
Feb 20 15:16:04.723094 master-0 kubenswrapper[28120]: I0220 15:16:04.723043 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"59d78ff4-b331-48f2-bc79-f89b4647e69c","Type":"ContainerStarted","Data":"440521a044786574d14827d220b65b9316b09a9baea6ddefac1fc44b747ec913"}
Feb 20 15:16:04.725520 master-0 kubenswrapper[28120]: I0220 15:16:04.725480 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj" event={"ID":"6d6be79c-d0e2-452f-8462-dd8022e4232d","Type":"ContainerStarted","Data":"01a82a8e6c0178f58e63c1f5fb5283f31a25bc4bb9f74ccbfee20bde93d606cb"}
Feb 20 15:16:04.729029 master-0 kubenswrapper[28120]: I0220 15:16:04.728990 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" event={"ID":"91310aca-57ce-4308-afcc-95511e83dc27","Type":"ContainerStarted","Data":"2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230"}
Feb 20 15:16:04.729397 master-0 kubenswrapper[28120]: I0220 15:16:04.729362 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" podUID="91310aca-57ce-4308-afcc-95511e83dc27" containerName="dnsmasq-dns" containerID="cri-o://2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230" gracePeriod=10
Feb 20 15:16:04.729651 master-0 kubenswrapper[28120]: I0220 15:16:04.729625 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l"
Feb 20 15:16:04.737330 master-0 kubenswrapper[28120]: I0220 15:16:04.737191 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d6ef9865-9c4d-489b-a84b-7e444820e473","Type":"ContainerStarted","Data":"61efa8e5744f805bd4f87d2c26ec9664354b3054c420d065ec76ad1bf4cd1914"}
Feb 20 15:16:04.737453 master-0 kubenswrapper[28120]: I0220 15:16:04.737375 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 20 15:16:04.740663 master-0 kubenswrapper[28120]: I0220 15:16:04.740591 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f53073f3-712a-4f4f-9d29-18e123a19303","Type":"ContainerStarted","Data":"6c690bff11bedbb5f44965cb7ce3bcd2880f45f3cacd5db13929a5b9b8681abf"}
Feb 20 15:16:04.742800 master-0 kubenswrapper[28120]: I0220 15:16:04.742726 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5wrbz" event={"ID":"5f4eec79-14a8-4bd7-9356-ba082bd01f91","Type":"ContainerStarted","Data":"c154920a4db5e4ce205707e6040da67dc8433f7be6a6cb74ec22c5c952d45591"}
Feb 20 15:16:04.763321 master-0 kubenswrapper[28120]: I0220 15:16:04.762758 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" podStartSLOduration=10.539655832 podStartE2EDuration="27.76273025s" podCreationTimestamp="2026-02-20 15:15:37 +0000 UTC" firstStartedPulling="2026-02-20 15:15:38.170859659 +0000 UTC m=+876.431653222" lastFinishedPulling="2026-02-20 15:15:55.393934077 +0000 UTC m=+893.654727640" observedRunningTime="2026-02-20 15:16:04.749787607 +0000 UTC m=+903.010581210" watchObservedRunningTime="2026-02-20 15:16:04.76273025 +0000 UTC m=+903.023523853"
Feb 20 15:16:04.836197 master-0 kubenswrapper[28120]: I0220 15:16:04.836045 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" podStartSLOduration=10.851031923 podStartE2EDuration="28.836020276s" podCreationTimestamp="2026-02-20 15:15:36 +0000 UTC" firstStartedPulling="2026-02-20 15:15:37.858031527 +0000 UTC m=+876.118825090" lastFinishedPulling="2026-02-20 15:15:55.84301988 +0000 UTC m=+894.103813443" observedRunningTime="2026-02-20 15:16:04.818261664 +0000 UTC m=+903.079055267" watchObservedRunningTime="2026-02-20 15:16:04.836020276 +0000 UTC m=+903.096813879"
Feb 20 15:16:04.853071 master-0 kubenswrapper[28120]: I0220 15:16:04.852981 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.063603056 podStartE2EDuration="23.852967379s" podCreationTimestamp="2026-02-20 15:15:41 +0000 UTC" firstStartedPulling="2026-02-20 15:15:54.066952303 +0000 UTC m=+892.327745866" lastFinishedPulling="2026-02-20 15:16:02.856316626 +0000 UTC m=+901.117110189" observedRunningTime="2026-02-20 15:16:04.850283922 +0000 UTC m=+903.111077485" watchObservedRunningTime="2026-02-20 15:16:04.852967379 +0000 UTC m=+903.113760942"
Feb 20 15:16:05.732275 master-0 kubenswrapper[28120]: I0220 15:16:05.732233 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l"
Feb 20 15:16:05.772114 master-0 kubenswrapper[28120]: I0220 15:16:05.772044 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mdjwf" event={"ID":"4a840047-42da-4d9c-81e2-8a4da0c3997f","Type":"ContainerStarted","Data":"47405c27f2014057e82aa9fc927970ff3363aaa9332020d856bbece40f66cb5b"}
Feb 20 15:16:05.776471 master-0 kubenswrapper[28120]: I0220 15:16:05.773038 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mdjwf"
Feb 20 15:16:05.781737 master-0 kubenswrapper[28120]: I0220 15:16:05.781690 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0c6cc641-5305-4446-bcaf-7a5ebe004556","Type":"ContainerStarted","Data":"a9446b8a474e46820f290e5964d535d00728956d8067143d57557824c57cd580"}
Feb 20 15:16:05.787437 master-0 kubenswrapper[28120]: I0220 15:16:05.787388 28120 generic.go:334] "Generic (PLEG): container finished" podID="91310aca-57ce-4308-afcc-95511e83dc27" containerID="2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230" exitCode=0
Feb 20 15:16:05.787565 master-0 kubenswrapper[28120]: I0220 15:16:05.787507 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" event={"ID":"91310aca-57ce-4308-afcc-95511e83dc27","Type":"ContainerDied","Data":"2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230"}
Feb 20 15:16:05.787630 master-0 kubenswrapper[28120]: I0220 15:16:05.787577 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l" event={"ID":"91310aca-57ce-4308-afcc-95511e83dc27","Type":"ContainerDied","Data":"d77a03b9f73bbf68b17db4fab61a89567298c871a6236628fe9fc99794d16db4"}
Feb 20 15:16:05.787680 master-0 kubenswrapper[28120]: I0220 15:16:05.787603 28120 scope.go:117] "RemoveContainer" containerID="2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230"
Feb 20 15:16:05.787775 master-0 kubenswrapper[28120]: I0220 15:16:05.787752 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-c5b5l"
Feb 20 15:16:05.801943 master-0 kubenswrapper[28120]: I0220 15:16:05.800138 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86fa1b6a-d104-4787-b6d5-21dfd3a324f8","Type":"ContainerStarted","Data":"8adaee0af42d8c571d79f832d25c27da679ac0d768ebd2e1584a2e78f559823f"}
Feb 20 15:16:05.804412 master-0 kubenswrapper[28120]: I0220 15:16:05.804345 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c1bd517d-8804-484c-a8b8-bc08705b9479","Type":"ContainerStarted","Data":"05b99d44655b9fb01005542d8f4f489bdfc3ef25484755deca1691c4671f228f"}
Feb 20 15:16:05.811164 master-0 kubenswrapper[28120]: I0220 15:16:05.811109 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"041ad820-183f-41ee-b690-b1687d55e12e","Type":"ContainerStarted","Data":"fa167e79bf213365d37983b936c0fa20c6cdd368d2f3e767dce20eed925344bb"}
Feb 20 15:16:05.815541 master-0 kubenswrapper[28120]: I0220 15:16:05.815511 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91310aca-57ce-4308-afcc-95511e83dc27-dns-svc\") pod \"91310aca-57ce-4308-afcc-95511e83dc27\" (UID: \"91310aca-57ce-4308-afcc-95511e83dc27\") "
Feb 20 15:16:05.815849 master-0 kubenswrapper[28120]: I0220 15:16:05.815835 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91310aca-57ce-4308-afcc-95511e83dc27-config\") pod \"91310aca-57ce-4308-afcc-95511e83dc27\" (UID: \"91310aca-57ce-4308-afcc-95511e83dc27\") "
Feb 20 15:16:05.815988 master-0 kubenswrapper[28120]: I0220 15:16:05.815974 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thvjk\" (UniqueName: \"kubernetes.io/projected/91310aca-57ce-4308-afcc-95511e83dc27-kube-api-access-thvjk\") pod \"91310aca-57ce-4308-afcc-95511e83dc27\" (UID: \"91310aca-57ce-4308-afcc-95511e83dc27\") "
Feb 20 15:16:05.824028 master-0 kubenswrapper[28120]: I0220 15:16:05.823915 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91310aca-57ce-4308-afcc-95511e83dc27-kube-api-access-thvjk" (OuterVolumeSpecName: "kube-api-access-thvjk") pod "91310aca-57ce-4308-afcc-95511e83dc27" (UID: "91310aca-57ce-4308-afcc-95511e83dc27"). InnerVolumeSpecName "kube-api-access-thvjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:16:05.835397 master-0 kubenswrapper[28120]: I0220 15:16:05.832590 28120 generic.go:334] "Generic (PLEG): container finished" podID="8742b7a0-5156-4b20-b683-feb84b330828" containerID="0fb4806d1b043d5aaf8d4c25cf1ccbbf03789ab393117a8cbb2a6cc598d90370" exitCode=0
Feb 20 15:16:05.835397 master-0 kubenswrapper[28120]: I0220 15:16:05.832664 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" event={"ID":"8742b7a0-5156-4b20-b683-feb84b330828","Type":"ContainerDied","Data":"0fb4806d1b043d5aaf8d4c25cf1ccbbf03789ab393117a8cbb2a6cc598d90370"}
Feb 20 15:16:05.835397 master-0 kubenswrapper[28120]: I0220 15:16:05.834574 28120 generic.go:334] "Generic (PLEG): container finished" podID="08ee872d-0365-4096-bc4e-387262807443" containerID="955dedc47cf0a09c4c42255f9184802e70bfc299c6dc8c94d316fe471142f859" exitCode=0
Feb 20 15:16:05.835397 master-0 kubenswrapper[28120]: I0220 15:16:05.834913 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5lchw" event={"ID":"08ee872d-0365-4096-bc4e-387262807443","Type":"ContainerDied","Data":"955dedc47cf0a09c4c42255f9184802e70bfc299c6dc8c94d316fe471142f859"}
Feb 20 15:16:05.872490 master-0 kubenswrapper[28120]: I0220 15:16:05.870815 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mdjwf" podStartSLOduration=11.818460605 podStartE2EDuration="19.870785917s" podCreationTimestamp="2026-02-20 15:15:46 +0000 UTC" firstStartedPulling="2026-02-20 15:15:55.355806287 +0000 UTC m=+893.616599850" lastFinishedPulling="2026-02-20 15:16:03.408131549 +0000 UTC m=+901.668925162" observedRunningTime="2026-02-20 15:16:05.818418182 +0000 UTC m=+904.079211765" watchObservedRunningTime="2026-02-20 15:16:05.870785917 +0000 UTC m=+904.131579520"
Feb 20 15:16:05.882796 master-0 kubenswrapper[28120]: I0220 15:16:05.882619 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91310aca-57ce-4308-afcc-95511e83dc27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91310aca-57ce-4308-afcc-95511e83dc27" (UID: "91310aca-57ce-4308-afcc-95511e83dc27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:16:05.895750 master-0 kubenswrapper[28120]: I0220 15:16:05.882912 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91310aca-57ce-4308-afcc-95511e83dc27-config" (OuterVolumeSpecName: "config") pod "91310aca-57ce-4308-afcc-95511e83dc27" (UID: "91310aca-57ce-4308-afcc-95511e83dc27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:16:05.924293 master-0 kubenswrapper[28120]: I0220 15:16:05.923910 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thvjk\" (UniqueName: \"kubernetes.io/projected/91310aca-57ce-4308-afcc-95511e83dc27-kube-api-access-thvjk\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:05.924293 master-0 kubenswrapper[28120]: I0220 15:16:05.924067 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91310aca-57ce-4308-afcc-95511e83dc27-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:05.924293 master-0 kubenswrapper[28120]: I0220 15:16:05.924082 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91310aca-57ce-4308-afcc-95511e83dc27-config\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:06.129890 master-0 kubenswrapper[28120]: I0220 15:16:06.129794 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-c5b5l"]
Feb 20 15:16:06.146220 master-0 kubenswrapper[28120]: I0220 15:16:06.146154 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-c5b5l"]
Feb 20 15:16:06.310453 master-0 kubenswrapper[28120]: E0220 15:16:06.310289 28120 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91310aca_57ce_4308_afcc_95511e83dc27.slice/crio-d77a03b9f73bbf68b17db4fab61a89567298c871a6236628fe9fc99794d16db4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91310aca_57ce_4308_afcc_95511e83dc27.slice\": RecentStats: unable to find data in memory cache]"
Feb 20 15:16:06.483580 master-0 kubenswrapper[28120]: I0220 15:16:06.483533 28120 scope.go:117] "RemoveContainer" containerID="d3e5ceaa912a7025433e8ada9af585c23e6549332d866a1b998298f085e56d2e"
Feb 20 15:16:06.566236 master-0 kubenswrapper[28120]: I0220 15:16:06.565817 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m"
Feb 20 15:16:06.594151 master-0 kubenswrapper[28120]: I0220 15:16:06.593987 28120 scope.go:117] "RemoveContainer" containerID="2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230"
Feb 20 15:16:06.606158 master-0 kubenswrapper[28120]: E0220 15:16:06.603863 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230\": container with ID starting with 2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230 not found: ID does not exist" containerID="2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230"
Feb 20 15:16:06.606536 master-0 kubenswrapper[28120]: I0220 15:16:06.606170 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230"} err="failed to get container status \"2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230\": rpc error: code = NotFound desc = could not find container \"2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230\": container with ID starting with 2a9532e178f7b876aadf0d969d53fad0dc254f611abe6f1de158279f217ee230 not found: ID does not exist"
Feb 20 15:16:06.606536 master-0 kubenswrapper[28120]: I0220 15:16:06.606231 28120 scope.go:117] "RemoveContainer" containerID="d3e5ceaa912a7025433e8ada9af585c23e6549332d866a1b998298f085e56d2e"
Feb 20 15:16:06.610264 master-0 kubenswrapper[28120]: E0220 15:16:06.610100 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3e5ceaa912a7025433e8ada9af585c23e6549332d866a1b998298f085e56d2e\": container with ID starting with d3e5ceaa912a7025433e8ada9af585c23e6549332d866a1b998298f085e56d2e not found: ID does not exist" containerID="d3e5ceaa912a7025433e8ada9af585c23e6549332d866a1b998298f085e56d2e"
Feb 20 15:16:06.610264 master-0 kubenswrapper[28120]: I0220 15:16:06.610211 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3e5ceaa912a7025433e8ada9af585c23e6549332d866a1b998298f085e56d2e"} err="failed to get container status \"d3e5ceaa912a7025433e8ada9af585c23e6549332d866a1b998298f085e56d2e\": rpc error: code = NotFound desc = could not find container \"d3e5ceaa912a7025433e8ada9af585c23e6549332d866a1b998298f085e56d2e\": container with ID starting with d3e5ceaa912a7025433e8ada9af585c23e6549332d866a1b998298f085e56d2e not found: ID does not exist"
Feb 20 15:16:06.742518 master-0 kubenswrapper[28120]: I0220 15:16:06.742478 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8742b7a0-5156-4b20-b683-feb84b330828-dns-svc\") pod \"8742b7a0-5156-4b20-b683-feb84b330828\" (UID: \"8742b7a0-5156-4b20-b683-feb84b330828\") "
Feb 20 15:16:06.742941 master-0 kubenswrapper[28120]: I0220 15:16:06.742553 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8742b7a0-5156-4b20-b683-feb84b330828-config\") pod \"8742b7a0-5156-4b20-b683-feb84b330828\" (UID: \"8742b7a0-5156-4b20-b683-feb84b330828\") "
Feb 20 15:16:06.743057 master-0 kubenswrapper[28120]: I0220 15:16:06.743028 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvpbc\" (UniqueName: \"kubernetes.io/projected/8742b7a0-5156-4b20-b683-feb84b330828-kube-api-access-vvpbc\") pod \"8742b7a0-5156-4b20-b683-feb84b330828\" (UID: \"8742b7a0-5156-4b20-b683-feb84b330828\") "
Feb 20 15:16:06.746413 master-0 kubenswrapper[28120]: I0220 15:16:06.746369 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8742b7a0-5156-4b20-b683-feb84b330828-kube-api-access-vvpbc" (OuterVolumeSpecName: "kube-api-access-vvpbc") pod "8742b7a0-5156-4b20-b683-feb84b330828" (UID: "8742b7a0-5156-4b20-b683-feb84b330828"). InnerVolumeSpecName "kube-api-access-vvpbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:16:06.821437 master-0 kubenswrapper[28120]: I0220 15:16:06.820242 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8742b7a0-5156-4b20-b683-feb84b330828-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8742b7a0-5156-4b20-b683-feb84b330828" (UID: "8742b7a0-5156-4b20-b683-feb84b330828"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:16:06.829316 master-0 kubenswrapper[28120]: I0220 15:16:06.829279 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8742b7a0-5156-4b20-b683-feb84b330828-config" (OuterVolumeSpecName: "config") pod "8742b7a0-5156-4b20-b683-feb84b330828" (UID: "8742b7a0-5156-4b20-b683-feb84b330828"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:16:06.850073 master-0 kubenswrapper[28120]: I0220 15:16:06.845075 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c1bd517d-8804-484c-a8b8-bc08705b9479","Type":"ContainerStarted","Data":"085a96cb57d82705e78620632335e051c4003cca09c237973cb80c1a81dcfc3c"}
Feb 20 15:16:06.850073 master-0 kubenswrapper[28120]: I0220 15:16:06.845261 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8742b7a0-5156-4b20-b683-feb84b330828-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:06.850073 master-0 kubenswrapper[28120]: I0220 15:16:06.845287 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8742b7a0-5156-4b20-b683-feb84b330828-config\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:06.850073 master-0 kubenswrapper[28120]: I0220 15:16:06.845301 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvpbc\" (UniqueName: \"kubernetes.io/projected/8742b7a0-5156-4b20-b683-feb84b330828-kube-api-access-vvpbc\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:06.853482 master-0 kubenswrapper[28120]: I0220 15:16:06.851126 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m" event={"ID":"8742b7a0-5156-4b20-b683-feb84b330828","Type":"ContainerDied","Data":"f7be75fb7df5e97c782927638f00a2324ffbe4556c4df5bc62521b9b776908ef"}
Feb 20 15:16:06.853482 master-0 kubenswrapper[28120]: I0220 15:16:06.851182 28120 scope.go:117] "RemoveContainer" containerID="0fb4806d1b043d5aaf8d4c25cf1ccbbf03789ab393117a8cbb2a6cc598d90370"
Feb 20 15:16:06.853482 master-0 kubenswrapper[28120]: I0220 15:16:06.851279 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-nf55m"
Feb 20 15:16:06.859441 master-0 kubenswrapper[28120]: I0220 15:16:06.859402 28120 generic.go:334] "Generic (PLEG): container finished" podID="c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" containerID="24ae547f022d6f5ee6a086e436e3927f70b26bd810a8bf76e3b3045c260f61d6" exitCode=0
Feb 20 15:16:06.859776 master-0 kubenswrapper[28120]: I0220 15:16:06.859732 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" event={"ID":"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7","Type":"ContainerDied","Data":"24ae547f022d6f5ee6a086e436e3927f70b26bd810a8bf76e3b3045c260f61d6"}
Feb 20 15:16:06.863954 master-0 kubenswrapper[28120]: I0220 15:16:06.863854 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0c6cc641-5305-4446-bcaf-7a5ebe004556","Type":"ContainerStarted","Data":"b91eff92bbf9ef5bccc921826020860640ee307ddb69ce3760c226a9f1e6816e"}
Feb 20 15:16:06.865491 master-0 kubenswrapper[28120]: I0220 15:16:06.865347 28120 generic.go:334] "Generic (PLEG): container finished" podID="6d6be79c-d0e2-452f-8462-dd8022e4232d" containerID="31f1e4f9df70da049bccae6494df64c1e2ede1af79b55f996521e7050fac4930" exitCode=0
Feb 20 15:16:06.865491 master-0 kubenswrapper[28120]: I0220 15:16:06.865410 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj" event={"ID":"6d6be79c-d0e2-452f-8462-dd8022e4232d","Type":"ContainerDied","Data":"31f1e4f9df70da049bccae6494df64c1e2ede1af79b55f996521e7050fac4930"}
Feb 20 15:16:06.876984 master-0 kubenswrapper[28120]: I0220 15:16:06.875646 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.002623167 podStartE2EDuration="17.875627121s" podCreationTimestamp="2026-02-20 15:15:49 +0000 UTC" firstStartedPulling="2026-02-20 15:15:58.741885989 +0000 UTC m=+897.002679552" lastFinishedPulling="2026-02-20 15:16:06.614889943 +0000 UTC m=+904.875683506" observedRunningTime="2026-02-20 15:16:06.874893533 +0000 UTC m=+905.135687106" watchObservedRunningTime="2026-02-20 15:16:06.875627121 +0000 UTC m=+905.136420704"
Feb 20 15:16:06.962365 master-0 kubenswrapper[28120]: I0220 15:16:06.962272 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.080895978 podStartE2EDuration="17.96225009s" podCreationTimestamp="2026-02-20 15:15:49 +0000 UTC" firstStartedPulling="2026-02-20 15:15:58.741718175 +0000 UTC m=+897.002511728" lastFinishedPulling="2026-02-20 15:16:06.623072277 +0000 UTC m=+904.883865840" observedRunningTime="2026-02-20 15:16:06.959817159 +0000 UTC m=+905.220610722" watchObservedRunningTime="2026-02-20 15:16:06.96225009 +0000 UTC m=+905.223043773"
Feb 20 15:16:06.999941 master-0 kubenswrapper[28120]: I0220 15:16:06.996751 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-nf55m"]
Feb 20 15:16:07.002645 master-0 kubenswrapper[28120]: I0220 15:16:07.002167 28120 scope.go:117] "RemoveContainer" containerID="595b6c9cfd35e711d528aa481fd547dd6157cc02b9449fbccd3c8c7535dafd76"
Feb 20 15:16:07.005105 master-0 kubenswrapper[28120]: I0220 15:16:07.004442 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-nf55m"]
Feb 20 15:16:07.066380 master-0 kubenswrapper[28120]: I0220 15:16:07.066280 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 20 15:16:07.878140 master-0 kubenswrapper[28120]: I0220 15:16:07.877663 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5wrbz" event={"ID":"5f4eec79-14a8-4bd7-9356-ba082bd01f91","Type":"ContainerStarted","Data":"2af66b7f76e8096b202e835ad2faf32e4ba37a8d7c2dae38985e85f5e2a4b6c3"}
Feb 20 15:16:07.881513 master-0 kubenswrapper[28120]: I0220 15:16:07.881481 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" event={"ID":"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7","Type":"ContainerStarted","Data":"53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65"}
Feb 20 15:16:07.881844 master-0 kubenswrapper[28120]: I0220 15:16:07.881692 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf"
Feb 20 15:16:07.883859 master-0 kubenswrapper[28120]: I0220 15:16:07.883808 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5lchw" event={"ID":"08ee872d-0365-4096-bc4e-387262807443","Type":"ContainerStarted","Data":"98cf91194f5f0be57eac0d2ce417aa1bdf56b6e1cdafdd0dd9ff2e29c7347a3a"}
Feb 20 15:16:07.884017 master-0 kubenswrapper[28120]: I0220 15:16:07.883989 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5lchw" event={"ID":"08ee872d-0365-4096-bc4e-387262807443","Type":"ContainerStarted","Data":"dd6ad85673422cc7d02ed087a9c54cd4d0ebb0c5922e8a751c7e567a917e05c1"}
Feb 20 15:16:07.884094 master-0 kubenswrapper[28120]: I0220 15:16:07.884070 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:16:07.884144 master-0 kubenswrapper[28120]: I0220 15:16:07.884105 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5lchw"
Feb 20 15:16:07.886507 master-0 kubenswrapper[28120]: I0220 15:16:07.886451 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj" event={"ID":"6d6be79c-d0e2-452f-8462-dd8022e4232d","Type":"ContainerStarted","Data":"cc23bd8f6aaf698af526ae0e1de2e0ec9809914fedcc831557c5b562562dbc3f"}
Feb 20 15:16:07.887077 master-0 kubenswrapper[28120]: I0220 15:16:07.886994 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj"
Feb 20 15:16:07.944063 master-0 kubenswrapper[28120]: I0220 15:16:07.933123 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5wrbz" podStartSLOduration=5.479076234 podStartE2EDuration="7.933092106s" podCreationTimestamp="2026-02-20 15:16:00 +0000 UTC" firstStartedPulling="2026-02-20 15:16:04.169118786 +0000 UTC m=+902.429912349" lastFinishedPulling="2026-02-20 15:16:06.623134658 +0000 UTC m=+904.883928221" observedRunningTime="2026-02-20 15:16:07.926794299 +0000 UTC m=+906.187587892" watchObservedRunningTime="2026-02-20 15:16:07.933092106 +0000 UTC m=+906.193885689"
Feb 20 15:16:08.069688 master-0 kubenswrapper[28120]: I0220 15:16:08.069626 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8742b7a0-5156-4b20-b683-feb84b330828" path="/var/lib/kubelet/pods/8742b7a0-5156-4b20-b683-feb84b330828/volumes"
Feb 20 15:16:08.070355 master-0 kubenswrapper[28120]: I0220 15:16:08.070325 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91310aca-57ce-4308-afcc-95511e83dc27" path="/var/lib/kubelet/pods/91310aca-57ce-4308-afcc-95511e83dc27/volumes"
Feb 20 15:16:08.107061 master-0 kubenswrapper[28120]: I0220 15:16:08.106960 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5lchw" podStartSLOduration=14.835965642 podStartE2EDuration="22.106915359s" podCreationTimestamp="2026-02-20 15:15:46 +0000 UTC" firstStartedPulling="2026-02-20 15:15:55.884026142 +0000 UTC m=+894.144819705" lastFinishedPulling="2026-02-20 15:16:03.154975849 +0000 UTC m=+901.415769422" observedRunningTime="2026-02-20 15:16:08.091953036 +0000 UTC m=+906.352746599" watchObservedRunningTime="2026-02-20 15:16:08.106915359 +0000 UTC m=+906.367708922"
Feb 20 15:16:08.260882 master-0 kubenswrapper[28120]: I0220 15:16:08.260768 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj" podStartSLOduration=8.260741752
podStartE2EDuration="8.260741752s" podCreationTimestamp="2026-02-20 15:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:16:08.245524963 +0000 UTC m=+906.506318526" watchObservedRunningTime="2026-02-20 15:16:08.260741752 +0000 UTC m=+906.521535315" Feb 20 15:16:08.453955 master-0 kubenswrapper[28120]: I0220 15:16:08.453841 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" podStartSLOduration=8.453814355 podStartE2EDuration="8.453814355s" podCreationTimestamp="2026-02-20 15:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:16:08.446847731 +0000 UTC m=+906.707641324" watchObservedRunningTime="2026-02-20 15:16:08.453814355 +0000 UTC m=+906.714607948" Feb 20 15:16:08.866304 master-0 kubenswrapper[28120]: I0220 15:16:08.866194 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 20 15:16:08.866304 master-0 kubenswrapper[28120]: I0220 15:16:08.866269 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 20 15:16:08.943446 master-0 kubenswrapper[28120]: I0220 15:16:08.943381 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 20 15:16:09.917399 master-0 kubenswrapper[28120]: I0220 15:16:09.917317 28120 generic.go:334] "Generic (PLEG): container finished" podID="f53073f3-712a-4f4f-9d29-18e123a19303" containerID="6c690bff11bedbb5f44965cb7ce3bcd2880f45f3cacd5db13929a5b9b8681abf" exitCode=0 Feb 20 15:16:09.917735 master-0 kubenswrapper[28120]: I0220 15:16:09.917458 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"f53073f3-712a-4f4f-9d29-18e123a19303","Type":"ContainerDied","Data":"6c690bff11bedbb5f44965cb7ce3bcd2880f45f3cacd5db13929a5b9b8681abf"} Feb 20 15:16:09.920779 master-0 kubenswrapper[28120]: I0220 15:16:09.920091 28120 generic.go:334] "Generic (PLEG): container finished" podID="59d78ff4-b331-48f2-bc79-f89b4647e69c" containerID="440521a044786574d14827d220b65b9316b09a9baea6ddefac1fc44b747ec913" exitCode=0 Feb 20 15:16:09.920779 master-0 kubenswrapper[28120]: I0220 15:16:09.920171 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"59d78ff4-b331-48f2-bc79-f89b4647e69c","Type":"ContainerDied","Data":"440521a044786574d14827d220b65b9316b09a9baea6ddefac1fc44b747ec913"} Feb 20 15:16:10.074778 master-0 kubenswrapper[28120]: I0220 15:16:10.074428 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 20 15:16:10.138600 master-0 kubenswrapper[28120]: I0220 15:16:10.136360 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 20 15:16:10.936733 master-0 kubenswrapper[28120]: I0220 15:16:10.936562 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f53073f3-712a-4f4f-9d29-18e123a19303","Type":"ContainerStarted","Data":"3ccc88c8f5bf7fa13c08dc7ea0dedb0a82c5532b3479354928b58026b4ee735d"} Feb 20 15:16:10.941906 master-0 kubenswrapper[28120]: I0220 15:16:10.941857 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"59d78ff4-b331-48f2-bc79-f89b4647e69c","Type":"ContainerStarted","Data":"3bb7ae36c74ee13cad7aeaf30e57ddcaa792567174cdf8f8bd9134655e8e98a0"} Feb 20 15:16:10.985970 master-0 kubenswrapper[28120]: I0220 15:16:10.985802 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=22.844322837 
podStartE2EDuration="30.985774091s" podCreationTimestamp="2026-02-20 15:15:40 +0000 UTC" firstStartedPulling="2026-02-20 15:15:55.312016735 +0000 UTC m=+893.572810338" lastFinishedPulling="2026-02-20 15:16:03.453468029 +0000 UTC m=+901.714261592" observedRunningTime="2026-02-20 15:16:10.975107575 +0000 UTC m=+909.235901168" watchObservedRunningTime="2026-02-20 15:16:10.985774091 +0000 UTC m=+909.246567694" Feb 20 15:16:11.032455 master-0 kubenswrapper[28120]: I0220 15:16:11.032328 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=25.231523867 podStartE2EDuration="33.03229999s" podCreationTimestamp="2026-02-20 15:15:38 +0000 UTC" firstStartedPulling="2026-02-20 15:15:55.354210157 +0000 UTC m=+893.615003720" lastFinishedPulling="2026-02-20 15:16:03.15498628 +0000 UTC m=+901.415779843" observedRunningTime="2026-02-20 15:16:11.015250805 +0000 UTC m=+909.276044378" watchObservedRunningTime="2026-02-20 15:16:11.03229999 +0000 UTC m=+909.293093593" Feb 20 15:16:11.033800 master-0 kubenswrapper[28120]: I0220 15:16:11.033728 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 20 15:16:12.018723 master-0 kubenswrapper[28120]: I0220 15:16:12.018619 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 20 15:16:13.648901 master-0 kubenswrapper[28120]: I0220 15:16:13.639363 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-6fsrj"] Feb 20 15:16:13.648901 master-0 kubenswrapper[28120]: I0220 15:16:13.639596 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj" podUID="6d6be79c-d0e2-452f-8462-dd8022e4232d" containerName="dnsmasq-dns" containerID="cri-o://cc23bd8f6aaf698af526ae0e1de2e0ec9809914fedcc831557c5b562562dbc3f" gracePeriod=10 Feb 20 15:16:13.648901 master-0 
kubenswrapper[28120]: I0220 15:16:13.642183 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj" Feb 20 15:16:13.725510 master-0 kubenswrapper[28120]: I0220 15:16:13.696139 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-wjnmk"] Feb 20 15:16:13.725510 master-0 kubenswrapper[28120]: E0220 15:16:13.696699 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91310aca-57ce-4308-afcc-95511e83dc27" containerName="init" Feb 20 15:16:13.725510 master-0 kubenswrapper[28120]: I0220 15:16:13.696713 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="91310aca-57ce-4308-afcc-95511e83dc27" containerName="init" Feb 20 15:16:13.725510 master-0 kubenswrapper[28120]: E0220 15:16:13.696726 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8742b7a0-5156-4b20-b683-feb84b330828" containerName="init" Feb 20 15:16:13.725510 master-0 kubenswrapper[28120]: I0220 15:16:13.696775 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8742b7a0-5156-4b20-b683-feb84b330828" containerName="init" Feb 20 15:16:13.725510 master-0 kubenswrapper[28120]: E0220 15:16:13.696793 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8742b7a0-5156-4b20-b683-feb84b330828" containerName="dnsmasq-dns" Feb 20 15:16:13.725510 master-0 kubenswrapper[28120]: I0220 15:16:13.696799 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8742b7a0-5156-4b20-b683-feb84b330828" containerName="dnsmasq-dns" Feb 20 15:16:13.725510 master-0 kubenswrapper[28120]: E0220 15:16:13.696813 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91310aca-57ce-4308-afcc-95511e83dc27" containerName="dnsmasq-dns" Feb 20 15:16:13.725510 master-0 kubenswrapper[28120]: I0220 15:16:13.696819 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="91310aca-57ce-4308-afcc-95511e83dc27" containerName="dnsmasq-dns" Feb 20 
15:16:13.725510 master-0 kubenswrapper[28120]: I0220 15:16:13.697127 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="8742b7a0-5156-4b20-b683-feb84b330828" containerName="dnsmasq-dns" Feb 20 15:16:13.725510 master-0 kubenswrapper[28120]: I0220 15:16:13.697185 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="91310aca-57ce-4308-afcc-95511e83dc27" containerName="dnsmasq-dns" Feb 20 15:16:13.725510 master-0 kubenswrapper[28120]: I0220 15:16:13.698585 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.725510 master-0 kubenswrapper[28120]: I0220 15:16:13.709675 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-wjnmk"] Feb 20 15:16:13.848550 master-0 kubenswrapper[28120]: I0220 15:16:13.848079 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-dns-svc\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.848550 master-0 kubenswrapper[28120]: I0220 15:16:13.848127 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.850206 master-0 kubenswrapper[28120]: I0220 15:16:13.850168 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmgcn\" (UniqueName: \"kubernetes.io/projected/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-kube-api-access-gmgcn\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: 
\"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.850279 master-0 kubenswrapper[28120]: I0220 15:16:13.850255 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.851258 master-0 kubenswrapper[28120]: I0220 15:16:13.851156 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-config\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.920974 master-0 kubenswrapper[28120]: I0220 15:16:13.920011 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 20 15:16:13.960188 master-0 kubenswrapper[28120]: I0220 15:16:13.960134 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-dns-svc\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.960387 master-0 kubenswrapper[28120]: I0220 15:16:13.960206 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.960387 master-0 kubenswrapper[28120]: I0220 15:16:13.960232 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmgcn\" (UniqueName: \"kubernetes.io/projected/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-kube-api-access-gmgcn\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.960387 master-0 kubenswrapper[28120]: I0220 15:16:13.960300 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.960483 master-0 kubenswrapper[28120]: I0220 15:16:13.960445 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-config\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.961714 master-0 kubenswrapper[28120]: I0220 15:16:13.961684 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-config\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.961855 master-0 kubenswrapper[28120]: I0220 15:16:13.961815 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.962430 master-0 kubenswrapper[28120]: I0220 15:16:13.962403 28120 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-dns-svc\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.962891 master-0 kubenswrapper[28120]: I0220 15:16:13.962860 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:13.979006 master-0 kubenswrapper[28120]: I0220 15:16:13.978498 28120 generic.go:334] "Generic (PLEG): container finished" podID="6d6be79c-d0e2-452f-8462-dd8022e4232d" containerID="cc23bd8f6aaf698af526ae0e1de2e0ec9809914fedcc831557c5b562562dbc3f" exitCode=0 Feb 20 15:16:13.979006 master-0 kubenswrapper[28120]: I0220 15:16:13.978543 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj" event={"ID":"6d6be79c-d0e2-452f-8462-dd8022e4232d","Type":"ContainerDied","Data":"cc23bd8f6aaf698af526ae0e1de2e0ec9809914fedcc831557c5b562562dbc3f"} Feb 20 15:16:13.979993 master-0 kubenswrapper[28120]: I0220 15:16:13.979961 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmgcn\" (UniqueName: \"kubernetes.io/projected/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-kube-api-access-gmgcn\") pod \"dnsmasq-dns-6fd49994df-wjnmk\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:14.133018 master-0 kubenswrapper[28120]: I0220 15:16:14.129873 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 20 15:16:14.133018 master-0 kubenswrapper[28120]: I0220 15:16:14.131939 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 20 15:16:14.136079 master-0 kubenswrapper[28120]: I0220 15:16:14.133599 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 20 15:16:14.136079 master-0 kubenswrapper[28120]: I0220 15:16:14.134267 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 20 15:16:14.145110 master-0 kubenswrapper[28120]: I0220 15:16:14.144979 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 20 15:16:14.166445 master-0 kubenswrapper[28120]: I0220 15:16:14.166113 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:14.178707 master-0 kubenswrapper[28120]: I0220 15:16:14.173669 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 20 15:16:14.192567 master-0 kubenswrapper[28120]: I0220 15:16:14.192507 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07135de-6439-4eb2-961e-7452c763ae6e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.192699 master-0 kubenswrapper[28120]: I0220 15:16:14.192595 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a07135de-6439-4eb2-961e-7452c763ae6e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.192740 master-0 kubenswrapper[28120]: I0220 15:16:14.192723 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a07135de-6439-4eb2-961e-7452c763ae6e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.192815 master-0 kubenswrapper[28120]: I0220 15:16:14.192799 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a07135de-6439-4eb2-961e-7452c763ae6e-scripts\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.192851 master-0 kubenswrapper[28120]: I0220 15:16:14.192831 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07135de-6439-4eb2-961e-7452c763ae6e-config\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.195571 master-0 kubenswrapper[28120]: I0220 15:16:14.194780 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a07135de-6439-4eb2-961e-7452c763ae6e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.195571 master-0 kubenswrapper[28120]: I0220 15:16:14.194969 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdq98\" (UniqueName: \"kubernetes.io/projected/a07135de-6439-4eb2-961e-7452c763ae6e-kube-api-access-jdq98\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.303059 master-0 kubenswrapper[28120]: I0220 15:16:14.302998 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a07135de-6439-4eb2-961e-7452c763ae6e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.303059 master-0 kubenswrapper[28120]: I0220 15:16:14.303058 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a07135de-6439-4eb2-961e-7452c763ae6e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.303331 master-0 kubenswrapper[28120]: I0220 15:16:14.303092 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a07135de-6439-4eb2-961e-7452c763ae6e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.303331 master-0 kubenswrapper[28120]: I0220 15:16:14.303125 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a07135de-6439-4eb2-961e-7452c763ae6e-scripts\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.303331 master-0 kubenswrapper[28120]: I0220 15:16:14.303145 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07135de-6439-4eb2-961e-7452c763ae6e-config\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.303331 master-0 kubenswrapper[28120]: I0220 15:16:14.303192 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a07135de-6439-4eb2-961e-7452c763ae6e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " 
pod="openstack/ovn-northd-0" Feb 20 15:16:14.303331 master-0 kubenswrapper[28120]: I0220 15:16:14.303231 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdq98\" (UniqueName: \"kubernetes.io/projected/a07135de-6439-4eb2-961e-7452c763ae6e-kube-api-access-jdq98\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.308792 master-0 kubenswrapper[28120]: I0220 15:16:14.304952 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a07135de-6439-4eb2-961e-7452c763ae6e-scripts\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.308792 master-0 kubenswrapper[28120]: I0220 15:16:14.305546 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07135de-6439-4eb2-961e-7452c763ae6e-config\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.309101 master-0 kubenswrapper[28120]: I0220 15:16:14.309021 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a07135de-6439-4eb2-961e-7452c763ae6e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.309260 master-0 kubenswrapper[28120]: I0220 15:16:14.309227 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj" Feb 20 15:16:14.312695 master-0 kubenswrapper[28120]: I0220 15:16:14.312661 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07135de-6439-4eb2-961e-7452c763ae6e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.315280 master-0 kubenswrapper[28120]: I0220 15:16:14.313689 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a07135de-6439-4eb2-961e-7452c763ae6e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.323050 master-0 kubenswrapper[28120]: I0220 15:16:14.322977 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a07135de-6439-4eb2-961e-7452c763ae6e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.325508 master-0 kubenswrapper[28120]: I0220 15:16:14.325476 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdq98\" (UniqueName: \"kubernetes.io/projected/a07135de-6439-4eb2-961e-7452c763ae6e-kube-api-access-jdq98\") pod \"ovn-northd-0\" (UID: \"a07135de-6439-4eb2-961e-7452c763ae6e\") " pod="openstack/ovn-northd-0" Feb 20 15:16:14.506621 master-0 kubenswrapper[28120]: I0220 15:16:14.506561 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-dns-svc\") pod \"6d6be79c-d0e2-452f-8462-dd8022e4232d\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " Feb 20 15:16:14.506621 master-0 kubenswrapper[28120]: I0220 15:16:14.506622 28120 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-config\") pod \"6d6be79c-d0e2-452f-8462-dd8022e4232d\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " Feb 20 15:16:14.506855 master-0 kubenswrapper[28120]: I0220 15:16:14.506679 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-ovsdbserver-nb\") pod \"6d6be79c-d0e2-452f-8462-dd8022e4232d\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " Feb 20 15:16:14.506855 master-0 kubenswrapper[28120]: I0220 15:16:14.506734 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-ovsdbserver-sb\") pod \"6d6be79c-d0e2-452f-8462-dd8022e4232d\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " Feb 20 15:16:14.506855 master-0 kubenswrapper[28120]: I0220 15:16:14.506779 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtcp7\" (UniqueName: \"kubernetes.io/projected/6d6be79c-d0e2-452f-8462-dd8022e4232d-kube-api-access-jtcp7\") pod \"6d6be79c-d0e2-452f-8462-dd8022e4232d\" (UID: \"6d6be79c-d0e2-452f-8462-dd8022e4232d\") " Feb 20 15:16:14.513131 master-0 kubenswrapper[28120]: I0220 15:16:14.513070 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d6be79c-d0e2-452f-8462-dd8022e4232d-kube-api-access-jtcp7" (OuterVolumeSpecName: "kube-api-access-jtcp7") pod "6d6be79c-d0e2-452f-8462-dd8022e4232d" (UID: "6d6be79c-d0e2-452f-8462-dd8022e4232d"). InnerVolumeSpecName "kube-api-access-jtcp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:16:14.564957 master-0 kubenswrapper[28120]: I0220 15:16:14.564852 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6d6be79c-d0e2-452f-8462-dd8022e4232d" (UID: "6d6be79c-d0e2-452f-8462-dd8022e4232d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:14.565960 master-0 kubenswrapper[28120]: I0220 15:16:14.565905 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6d6be79c-d0e2-452f-8462-dd8022e4232d" (UID: "6d6be79c-d0e2-452f-8462-dd8022e4232d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:14.567109 master-0 kubenswrapper[28120]: I0220 15:16:14.567050 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d6be79c-d0e2-452f-8462-dd8022e4232d" (UID: "6d6be79c-d0e2-452f-8462-dd8022e4232d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:14.568739 master-0 kubenswrapper[28120]: I0220 15:16:14.567813 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-config" (OuterVolumeSpecName: "config") pod "6d6be79c-d0e2-452f-8462-dd8022e4232d" (UID: "6d6be79c-d0e2-452f-8462-dd8022e4232d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:14.576437 master-0 kubenswrapper[28120]: I0220 15:16:14.576211 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 20 15:16:14.608858 master-0 kubenswrapper[28120]: I0220 15:16:14.608805 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:14.608858 master-0 kubenswrapper[28120]: I0220 15:16:14.608843 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:14.608858 master-0 kubenswrapper[28120]: I0220 15:16:14.608855 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:14.608858 master-0 kubenswrapper[28120]: I0220 15:16:14.608866 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d6be79c-d0e2-452f-8462-dd8022e4232d-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:14.609167 master-0 kubenswrapper[28120]: I0220 15:16:14.608876 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtcp7\" (UniqueName: \"kubernetes.io/projected/6d6be79c-d0e2-452f-8462-dd8022e4232d-kube-api-access-jtcp7\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:14.671606 master-0 kubenswrapper[28120]: I0220 15:16:14.671563 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-wjnmk"] Feb 20 15:16:14.679845 master-0 kubenswrapper[28120]: W0220 15:16:14.679796 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1042f39c_bf98_4a4f_92ed_5b937bbc88f1.slice/crio-70928ee6bde31cc19bee02811d7a1d3f237ab49cca76b0e61f4b3853abba7d47 WatchSource:0}: Error 
finding container 70928ee6bde31cc19bee02811d7a1d3f237ab49cca76b0e61f4b3853abba7d47: Status 404 returned error can't find the container with id 70928ee6bde31cc19bee02811d7a1d3f237ab49cca76b0e61f4b3853abba7d47 Feb 20 15:16:14.990314 master-0 kubenswrapper[28120]: I0220 15:16:14.990234 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" event={"ID":"1042f39c-bf98-4a4f-92ed-5b937bbc88f1","Type":"ContainerStarted","Data":"70928ee6bde31cc19bee02811d7a1d3f237ab49cca76b0e61f4b3853abba7d47"} Feb 20 15:16:14.992709 master-0 kubenswrapper[28120]: I0220 15:16:14.992668 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj" event={"ID":"6d6be79c-d0e2-452f-8462-dd8022e4232d","Type":"ContainerDied","Data":"01a82a8e6c0178f58e63c1f5fb5283f31a25bc4bb9f74ccbfee20bde93d606cb"} Feb 20 15:16:14.992709 master-0 kubenswrapper[28120]: I0220 15:16:14.992709 28120 scope.go:117] "RemoveContainer" containerID="cc23bd8f6aaf698af526ae0e1de2e0ec9809914fedcc831557c5b562562dbc3f" Feb 20 15:16:14.992848 master-0 kubenswrapper[28120]: I0220 15:16:14.992822 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-6fsrj" Feb 20 15:16:15.023479 master-0 kubenswrapper[28120]: I0220 15:16:15.023425 28120 scope.go:117] "RemoveContainer" containerID="31f1e4f9df70da049bccae6494df64c1e2ede1af79b55f996521e7050fac4930" Feb 20 15:16:15.068123 master-0 kubenswrapper[28120]: I0220 15:16:15.064685 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-6fsrj"] Feb 20 15:16:15.077315 master-0 kubenswrapper[28120]: I0220 15:16:15.073051 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-6fsrj"] Feb 20 15:16:15.077561 master-0 kubenswrapper[28120]: W0220 15:16:15.077508 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda07135de_6439_4eb2_961e_7452c763ae6e.slice/crio-0e4c6b120d7d123f7776a84eea402ed9d460c1f1b410a0dc80abc1eb80b84c22 WatchSource:0}: Error finding container 0e4c6b120d7d123f7776a84eea402ed9d460c1f1b410a0dc80abc1eb80b84c22: Status 404 returned error can't find the container with id 0e4c6b120d7d123f7776a84eea402ed9d460c1f1b410a0dc80abc1eb80b84c22 Feb 20 15:16:15.082252 master-0 kubenswrapper[28120]: I0220 15:16:15.082209 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 20 15:16:15.393021 master-0 kubenswrapper[28120]: I0220 15:16:15.392879 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 20 15:16:15.393021 master-0 kubenswrapper[28120]: I0220 15:16:15.392953 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 20 15:16:15.508471 master-0 kubenswrapper[28120]: I0220 15:16:15.508407 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 20 15:16:15.689411 master-0 kubenswrapper[28120]: I0220 15:16:15.689269 28120 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/swift-storage-0"] Feb 20 15:16:15.689957 master-0 kubenswrapper[28120]: E0220 15:16:15.689755 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6be79c-d0e2-452f-8462-dd8022e4232d" containerName="init" Feb 20 15:16:15.689957 master-0 kubenswrapper[28120]: I0220 15:16:15.689772 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6be79c-d0e2-452f-8462-dd8022e4232d" containerName="init" Feb 20 15:16:15.689957 master-0 kubenswrapper[28120]: E0220 15:16:15.689798 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6be79c-d0e2-452f-8462-dd8022e4232d" containerName="dnsmasq-dns" Feb 20 15:16:15.689957 master-0 kubenswrapper[28120]: I0220 15:16:15.689807 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6be79c-d0e2-452f-8462-dd8022e4232d" containerName="dnsmasq-dns" Feb 20 15:16:15.690139 master-0 kubenswrapper[28120]: I0220 15:16:15.690086 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6be79c-d0e2-452f-8462-dd8022e4232d" containerName="dnsmasq-dns" Feb 20 15:16:15.697532 master-0 kubenswrapper[28120]: I0220 15:16:15.697467 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 20 15:16:15.700483 master-0 kubenswrapper[28120]: I0220 15:16:15.700438 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 20 15:16:15.700629 master-0 kubenswrapper[28120]: I0220 15:16:15.700568 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 20 15:16:15.710797 master-0 kubenswrapper[28120]: I0220 15:16:15.710736 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 20 15:16:15.731556 master-0 kubenswrapper[28120]: I0220 15:16:15.731485 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 20 15:16:15.786215 master-0 kubenswrapper[28120]: I0220 15:16:15.786164 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" Feb 20 15:16:15.861131 master-0 kubenswrapper[28120]: I0220 15:16:15.860206 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d01f0ab2-17d1-418f-960c-34572106eef6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f178b9ee-9941-4f7f-a44a-ddc5a3889366\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.861131 master-0 kubenswrapper[28120]: I0220 15:16:15.861106 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/742c46aa-374c-4e50-a4a1-6e46f7c13937-cache\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.861401 master-0 kubenswrapper[28120]: I0220 15:16:15.861159 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/742c46aa-374c-4e50-a4a1-6e46f7c13937-lock\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.861401 master-0 kubenswrapper[28120]: I0220 15:16:15.861358 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742c46aa-374c-4e50-a4a1-6e46f7c13937-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.861493 master-0 kubenswrapper[28120]: I0220 15:16:15.861416 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.861599 master-0 kubenswrapper[28120]: I0220 15:16:15.861566 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lfct\" (UniqueName: \"kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-kube-api-access-6lfct\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.965763 master-0 kubenswrapper[28120]: I0220 15:16:15.962719 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lfct\" (UniqueName: \"kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-kube-api-access-6lfct\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.965763 master-0 kubenswrapper[28120]: I0220 15:16:15.962814 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d01f0ab2-17d1-418f-960c-34572106eef6\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^f178b9ee-9941-4f7f-a44a-ddc5a3889366\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.965763 master-0 kubenswrapper[28120]: I0220 15:16:15.962837 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/742c46aa-374c-4e50-a4a1-6e46f7c13937-cache\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.965763 master-0 kubenswrapper[28120]: I0220 15:16:15.962878 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/742c46aa-374c-4e50-a4a1-6e46f7c13937-lock\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.965763 master-0 kubenswrapper[28120]: I0220 15:16:15.962937 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742c46aa-374c-4e50-a4a1-6e46f7c13937-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.965763 master-0 kubenswrapper[28120]: I0220 15:16:15.962965 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.965763 master-0 kubenswrapper[28120]: I0220 15:16:15.963388 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/742c46aa-374c-4e50-a4a1-6e46f7c13937-cache\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 
15:16:15.965763 master-0 kubenswrapper[28120]: I0220 15:16:15.963717 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/742c46aa-374c-4e50-a4a1-6e46f7c13937-lock\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.965763 master-0 kubenswrapper[28120]: E0220 15:16:15.963862 28120 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 20 15:16:15.965763 master-0 kubenswrapper[28120]: E0220 15:16:15.963876 28120 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 20 15:16:15.965763 master-0 kubenswrapper[28120]: E0220 15:16:15.963909 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift podName:742c46aa-374c-4e50-a4a1-6e46f7c13937 nodeName:}" failed. No retries permitted until 2026-02-20 15:16:16.463897292 +0000 UTC m=+914.724690855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift") pod "swift-storage-0" (UID: "742c46aa-374c-4e50-a4a1-6e46f7c13937") : configmap "swift-ring-files" not found Feb 20 15:16:15.965763 master-0 kubenswrapper[28120]: I0220 15:16:15.965579 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 20 15:16:15.965763 master-0 kubenswrapper[28120]: I0220 15:16:15.965632 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d01f0ab2-17d1-418f-960c-34572106eef6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f178b9ee-9941-4f7f-a44a-ddc5a3889366\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/39e0ac2eb992f7c1a204efa39a8c2fd196e5ce277131524c46c80fd0b8f094f3/globalmount\"" pod="openstack/swift-storage-0" Feb 20 15:16:15.966660 master-0 kubenswrapper[28120]: I0220 15:16:15.966640 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742c46aa-374c-4e50-a4a1-6e46f7c13937-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:15.989576 master-0 kubenswrapper[28120]: I0220 15:16:15.989534 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lfct\" (UniqueName: \"kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-kube-api-access-6lfct\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:16.005984 master-0 kubenswrapper[28120]: I0220 15:16:16.005895 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a07135de-6439-4eb2-961e-7452c763ae6e","Type":"ContainerStarted","Data":"0e4c6b120d7d123f7776a84eea402ed9d460c1f1b410a0dc80abc1eb80b84c22"} Feb 20 15:16:16.009684 master-0 kubenswrapper[28120]: I0220 15:16:16.009641 28120 generic.go:334] "Generic (PLEG): container finished" podID="1042f39c-bf98-4a4f-92ed-5b937bbc88f1" containerID="2de49e6702d99621e5d47e0b691e94452fd66b24515c959245ce30730e84d1a3" exitCode=0 Feb 20 15:16:16.009854 master-0 kubenswrapper[28120]: I0220 15:16:16.009717 28120 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" event={"ID":"1042f39c-bf98-4a4f-92ed-5b937bbc88f1","Type":"ContainerDied","Data":"2de49e6702d99621e5d47e0b691e94452fd66b24515c959245ce30730e84d1a3"} Feb 20 15:16:16.143044 master-0 kubenswrapper[28120]: I0220 15:16:16.140135 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d6be79c-d0e2-452f-8462-dd8022e4232d" path="/var/lib/kubelet/pods/6d6be79c-d0e2-452f-8462-dd8022e4232d/volumes" Feb 20 15:16:16.298757 master-0 kubenswrapper[28120]: I0220 15:16:16.298704 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 20 15:16:16.491328 master-0 kubenswrapper[28120]: I0220 15:16:16.491281 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:16.491527 master-0 kubenswrapper[28120]: E0220 15:16:16.491451 28120 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 20 15:16:16.491527 master-0 kubenswrapper[28120]: E0220 15:16:16.491475 28120 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 20 15:16:16.491527 master-0 kubenswrapper[28120]: E0220 15:16:16.491524 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift podName:742c46aa-374c-4e50-a4a1-6e46f7c13937 nodeName:}" failed. No retries permitted until 2026-02-20 15:16:17.491506052 +0000 UTC m=+915.752299615 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift") pod "swift-storage-0" (UID: "742c46aa-374c-4e50-a4a1-6e46f7c13937") : configmap "swift-ring-files" not found Feb 20 15:16:16.546261 master-0 kubenswrapper[28120]: I0220 15:16:16.545285 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 20 15:16:16.546261 master-0 kubenswrapper[28120]: I0220 15:16:16.545599 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 20 15:16:17.028164 master-0 kubenswrapper[28120]: I0220 15:16:17.028107 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a07135de-6439-4eb2-961e-7452c763ae6e","Type":"ContainerStarted","Data":"77469a7f8e3f21ef20a1db7d0d66ac2b0b1f179cc9189e7106d1796fd281259e"} Feb 20 15:16:17.028164 master-0 kubenswrapper[28120]: I0220 15:16:17.028158 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a07135de-6439-4eb2-961e-7452c763ae6e","Type":"ContainerStarted","Data":"9ea7c21dfdf50ed4263f805f456ba6469735593b0a00566c4a0259e7657ab37f"} Feb 20 15:16:17.029153 master-0 kubenswrapper[28120]: I0220 15:16:17.029109 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 20 15:16:17.033753 master-0 kubenswrapper[28120]: I0220 15:16:17.033552 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" event={"ID":"1042f39c-bf98-4a4f-92ed-5b937bbc88f1","Type":"ContainerStarted","Data":"c530edc6df6a6aae8c87428b8cd75716632ac75413dba47a954eba749a097089"} Feb 20 15:16:17.081587 master-0 kubenswrapper[28120]: I0220 15:16:17.075243 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.548660213 
podStartE2EDuration="3.07521848s" podCreationTimestamp="2026-02-20 15:16:14 +0000 UTC" firstStartedPulling="2026-02-20 15:16:15.08030862 +0000 UTC m=+913.341102183" lastFinishedPulling="2026-02-20 15:16:16.606866887 +0000 UTC m=+914.867660450" observedRunningTime="2026-02-20 15:16:17.065499388 +0000 UTC m=+915.326292961" watchObservedRunningTime="2026-02-20 15:16:17.07521848 +0000 UTC m=+915.336012073" Feb 20 15:16:17.096950 master-0 kubenswrapper[28120]: I0220 15:16:17.094694 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" podStartSLOduration=4.094673445 podStartE2EDuration="4.094673445s" podCreationTimestamp="2026-02-20 15:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:16:17.084164703 +0000 UTC m=+915.344958266" watchObservedRunningTime="2026-02-20 15:16:17.094673445 +0000 UTC m=+915.355467008" Feb 20 15:16:17.389039 master-0 kubenswrapper[28120]: I0220 15:16:17.388568 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 20 15:16:17.515228 master-0 kubenswrapper[28120]: I0220 15:16:17.515131 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:17.515651 master-0 kubenswrapper[28120]: E0220 15:16:17.515603 28120 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 20 15:16:17.515651 master-0 kubenswrapper[28120]: E0220 15:16:17.515638 28120 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 20 15:16:17.515827 master-0 
kubenswrapper[28120]: E0220 15:16:17.515701 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift podName:742c46aa-374c-4e50-a4a1-6e46f7c13937 nodeName:}" failed. No retries permitted until 2026-02-20 15:16:19.515680458 +0000 UTC m=+917.776474051 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift") pod "swift-storage-0" (UID: "742c46aa-374c-4e50-a4a1-6e46f7c13937") : configmap "swift-ring-files" not found Feb 20 15:16:17.616299 master-0 kubenswrapper[28120]: I0220 15:16:17.616230 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d01f0ab2-17d1-418f-960c-34572106eef6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f178b9ee-9941-4f7f-a44a-ddc5a3889366\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0" Feb 20 15:16:18.049875 master-0 kubenswrapper[28120]: I0220 15:16:18.047603 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:18.170991 master-0 kubenswrapper[28120]: I0220 15:16:18.170871 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 20 15:16:18.795352 master-0 kubenswrapper[28120]: I0220 15:16:18.795282 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tk445"] Feb 20 15:16:18.796713 master-0 kubenswrapper[28120]: I0220 15:16:18.796678 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tk445" Feb 20 15:16:18.799567 master-0 kubenswrapper[28120]: I0220 15:16:18.799473 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 20 15:16:18.806421 master-0 kubenswrapper[28120]: I0220 15:16:18.806317 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-chd7d"] Feb 20 15:16:18.853615 master-0 kubenswrapper[28120]: I0220 15:16:18.809287 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-chd7d" Feb 20 15:16:18.853615 master-0 kubenswrapper[28120]: I0220 15:16:18.816553 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tk445"] Feb 20 15:16:18.853615 master-0 kubenswrapper[28120]: I0220 15:16:18.816582 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 20 15:16:18.853615 master-0 kubenswrapper[28120]: I0220 15:16:18.816681 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 20 15:16:18.853615 master-0 kubenswrapper[28120]: I0220 15:16:18.816950 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 20 15:16:18.857355 master-0 kubenswrapper[28120]: I0220 15:16:18.856059 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-etc-swift\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d" Feb 20 15:16:18.857597 master-0 kubenswrapper[28120]: I0220 15:16:18.857569 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-ring-data-devices\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d" Feb 20 15:16:18.857802 master-0 kubenswrapper[28120]: I0220 15:16:18.857782 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-swiftconf\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d" Feb 20 15:16:18.857980 master-0 kubenswrapper[28120]: I0220 15:16:18.857960 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/812afa64-06d1-4cb1-95b4-93dda2cce8eb-operator-scripts\") pod \"root-account-create-update-tk445\" (UID: \"812afa64-06d1-4cb1-95b4-93dda2cce8eb\") " pod="openstack/root-account-create-update-tk445" Feb 20 15:16:18.858108 master-0 kubenswrapper[28120]: I0220 15:16:18.858090 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhczl\" (UniqueName: \"kubernetes.io/projected/812afa64-06d1-4cb1-95b4-93dda2cce8eb-kube-api-access-zhczl\") pod \"root-account-create-update-tk445\" (UID: \"812afa64-06d1-4cb1-95b4-93dda2cce8eb\") " pod="openstack/root-account-create-update-tk445" Feb 20 15:16:18.858266 master-0 kubenswrapper[28120]: I0220 15:16:18.858253 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-scripts\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d" Feb 20 15:16:18.858414 master-0 kubenswrapper[28120]: I0220 15:16:18.858401 28120 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-dispersionconf\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d" Feb 20 15:16:18.858506 master-0 kubenswrapper[28120]: I0220 15:16:18.858494 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbqf\" (UniqueName: \"kubernetes.io/projected/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-kube-api-access-gwbqf\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d" Feb 20 15:16:18.858604 master-0 kubenswrapper[28120]: I0220 15:16:18.858591 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-combined-ca-bundle\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d" Feb 20 15:16:18.885845 master-0 kubenswrapper[28120]: I0220 15:16:18.885786 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-chd7d"] Feb 20 15:16:18.960716 master-0 kubenswrapper[28120]: I0220 15:16:18.960645 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-swiftconf\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d" Feb 20 15:16:18.960966 master-0 kubenswrapper[28120]: I0220 15:16:18.960879 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/812afa64-06d1-4cb1-95b4-93dda2cce8eb-operator-scripts\") pod \"root-account-create-update-tk445\" (UID: \"812afa64-06d1-4cb1-95b4-93dda2cce8eb\") " pod="openstack/root-account-create-update-tk445" Feb 20 15:16:18.961028 master-0 kubenswrapper[28120]: I0220 15:16:18.960981 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhczl\" (UniqueName: \"kubernetes.io/projected/812afa64-06d1-4cb1-95b4-93dda2cce8eb-kube-api-access-zhczl\") pod \"root-account-create-update-tk445\" (UID: \"812afa64-06d1-4cb1-95b4-93dda2cce8eb\") " pod="openstack/root-account-create-update-tk445" Feb 20 15:16:18.961197 master-0 kubenswrapper[28120]: I0220 15:16:18.961167 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-scripts\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d" Feb 20 15:16:18.961338 master-0 kubenswrapper[28120]: I0220 15:16:18.961317 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-dispersionconf\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d" Feb 20 15:16:18.961404 master-0 kubenswrapper[28120]: I0220 15:16:18.961363 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwbqf\" (UniqueName: \"kubernetes.io/projected/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-kube-api-access-gwbqf\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d" Feb 20 15:16:18.961452 master-0 kubenswrapper[28120]: I0220 15:16:18.961417 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-combined-ca-bundle\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:18.961506 master-0 kubenswrapper[28120]: I0220 15:16:18.961470 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-etc-swift\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:18.961559 master-0 kubenswrapper[28120]: I0220 15:16:18.961506 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-ring-data-devices\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:18.961647 master-0 kubenswrapper[28120]: I0220 15:16:18.961594 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/812afa64-06d1-4cb1-95b4-93dda2cce8eb-operator-scripts\") pod \"root-account-create-update-tk445\" (UID: \"812afa64-06d1-4cb1-95b4-93dda2cce8eb\") " pod="openstack/root-account-create-update-tk445"
Feb 20 15:16:18.961888 master-0 kubenswrapper[28120]: I0220 15:16:18.961862 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-etc-swift\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:18.962655 master-0 kubenswrapper[28120]: I0220 15:16:18.962617 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-ring-data-devices\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:18.963631 master-0 kubenswrapper[28120]: I0220 15:16:18.963597 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-scripts\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:18.964120 master-0 kubenswrapper[28120]: I0220 15:16:18.964056 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-swiftconf\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:18.964806 master-0 kubenswrapper[28120]: I0220 15:16:18.964771 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-dispersionconf\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:18.966254 master-0 kubenswrapper[28120]: I0220 15:16:18.966207 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-combined-ca-bundle\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:18.979812 master-0 kubenswrapper[28120]: I0220 15:16:18.979758 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhczl\" (UniqueName: \"kubernetes.io/projected/812afa64-06d1-4cb1-95b4-93dda2cce8eb-kube-api-access-zhczl\") pod \"root-account-create-update-tk445\" (UID: \"812afa64-06d1-4cb1-95b4-93dda2cce8eb\") " pod="openstack/root-account-create-update-tk445"
Feb 20 15:16:18.983770 master-0 kubenswrapper[28120]: I0220 15:16:18.983725 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbqf\" (UniqueName: \"kubernetes.io/projected/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-kube-api-access-gwbqf\") pod \"swift-ring-rebalance-chd7d\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") " pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:19.173843 master-0 kubenswrapper[28120]: I0220 15:16:19.173728 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tk445"
Feb 20 15:16:19.180005 master-0 kubenswrapper[28120]: I0220 15:16:19.179590 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:19.575163 master-0 kubenswrapper[28120]: I0220 15:16:19.575080 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0"
Feb 20 15:16:19.575868 master-0 kubenswrapper[28120]: E0220 15:16:19.575351 28120 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 20 15:16:19.575868 master-0 kubenswrapper[28120]: E0220 15:16:19.575407 28120 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 20 15:16:19.575868 master-0 kubenswrapper[28120]: E0220 15:16:19.575508 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift podName:742c46aa-374c-4e50-a4a1-6e46f7c13937 nodeName:}" failed. No retries permitted until 2026-02-20 15:16:23.575447774 +0000 UTC m=+921.836241337 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift") pod "swift-storage-0" (UID: "742c46aa-374c-4e50-a4a1-6e46f7c13937") : configmap "swift-ring-files" not found
Feb 20 15:16:19.699078 master-0 kubenswrapper[28120]: I0220 15:16:19.698188 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tk445"]
Feb 20 15:16:19.704237 master-0 kubenswrapper[28120]: W0220 15:16:19.704195 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod812afa64_06d1_4cb1_95b4_93dda2cce8eb.slice/crio-c44324ea06e821177dbf9eaf018a2e8f57f97d5e182d39044798d771e9172850 WatchSource:0}: Error finding container c44324ea06e821177dbf9eaf018a2e8f57f97d5e182d39044798d771e9172850: Status 404 returned error can't find the container with id c44324ea06e821177dbf9eaf018a2e8f57f97d5e182d39044798d771e9172850
Feb 20 15:16:19.783633 master-0 kubenswrapper[28120]: I0220 15:16:19.783585 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-chd7d"]
Feb 20 15:16:20.073291 master-0 kubenswrapper[28120]: I0220 15:16:20.073186 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-chd7d" event={"ID":"e9cef7d6-6daa-4419-8bfa-4591fab1d15e","Type":"ContainerStarted","Data":"10dd15c106f2515f026ad477f562837c1b1cfacc310a9e8d71139cbb65f8f1b2"}
Feb 20 15:16:20.073467 master-0 kubenswrapper[28120]: I0220 15:16:20.073246 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tk445" event={"ID":"812afa64-06d1-4cb1-95b4-93dda2cce8eb","Type":"ContainerStarted","Data":"abe9bfbd99e3ebb7cc3894ba981f0a3cb3b18e6271103e9c832b5e1615b4860d"}
Feb 20 15:16:20.073467 master-0 kubenswrapper[28120]: I0220 15:16:20.073320 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tk445" event={"ID":"812afa64-06d1-4cb1-95b4-93dda2cce8eb","Type":"ContainerStarted","Data":"c44324ea06e821177dbf9eaf018a2e8f57f97d5e182d39044798d771e9172850"}
Feb 20 15:16:20.102916 master-0 kubenswrapper[28120]: I0220 15:16:20.102795 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-tk445" podStartSLOduration=2.102763037 podStartE2EDuration="2.102763037s" podCreationTimestamp="2026-02-20 15:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:16:20.095754812 +0000 UTC m=+918.356548395" watchObservedRunningTime="2026-02-20 15:16:20.102763037 +0000 UTC m=+918.363556640"
Feb 20 15:16:21.090291 master-0 kubenswrapper[28120]: I0220 15:16:21.089316 28120 generic.go:334] "Generic (PLEG): container finished" podID="812afa64-06d1-4cb1-95b4-93dda2cce8eb" containerID="abe9bfbd99e3ebb7cc3894ba981f0a3cb3b18e6271103e9c832b5e1615b4860d" exitCode=0
Feb 20 15:16:21.090291 master-0 kubenswrapper[28120]: I0220 15:16:21.089393 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tk445" event={"ID":"812afa64-06d1-4cb1-95b4-93dda2cce8eb","Type":"ContainerDied","Data":"abe9bfbd99e3ebb7cc3894ba981f0a3cb3b18e6271103e9c832b5e1615b4860d"}
Feb 20 15:16:22.216207 master-0 kubenswrapper[28120]: I0220 15:16:22.216124 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rccl4"]
Feb 20 15:16:22.218192 master-0 kubenswrapper[28120]: I0220 15:16:22.218145 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rccl4"
Feb 20 15:16:22.227457 master-0 kubenswrapper[28120]: I0220 15:16:22.226109 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rccl4"]
Feb 20 15:16:22.306084 master-0 kubenswrapper[28120]: I0220 15:16:22.304976 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2588-account-create-update-2jr7b"]
Feb 20 15:16:22.306319 master-0 kubenswrapper[28120]: I0220 15:16:22.306243 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2588-account-create-update-2jr7b"
Feb 20 15:16:22.308302 master-0 kubenswrapper[28120]: I0220 15:16:22.308265 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 20 15:16:22.322769 master-0 kubenswrapper[28120]: I0220 15:16:22.322427 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2588-account-create-update-2jr7b"]
Feb 20 15:16:22.335289 master-0 kubenswrapper[28120]: I0220 15:16:22.331348 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9r25\" (UniqueName: \"kubernetes.io/projected/15cb2c0b-9fbe-4047-b752-b96968eb5408-kube-api-access-k9r25\") pod \"glance-db-create-rccl4\" (UID: \"15cb2c0b-9fbe-4047-b752-b96968eb5408\") " pod="openstack/glance-db-create-rccl4"
Feb 20 15:16:22.335289 master-0 kubenswrapper[28120]: I0220 15:16:22.331416 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5gph\" (UniqueName: \"kubernetes.io/projected/e5684c90-d24c-4dff-b5aa-ae72282b2a6f-kube-api-access-j5gph\") pod \"glance-2588-account-create-update-2jr7b\" (UID: \"e5684c90-d24c-4dff-b5aa-ae72282b2a6f\") " pod="openstack/glance-2588-account-create-update-2jr7b"
Feb 20 15:16:22.335289 master-0 kubenswrapper[28120]: I0220 15:16:22.331857 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5684c90-d24c-4dff-b5aa-ae72282b2a6f-operator-scripts\") pod \"glance-2588-account-create-update-2jr7b\" (UID: \"e5684c90-d24c-4dff-b5aa-ae72282b2a6f\") " pod="openstack/glance-2588-account-create-update-2jr7b"
Feb 20 15:16:22.335289 master-0 kubenswrapper[28120]: I0220 15:16:22.331968 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15cb2c0b-9fbe-4047-b752-b96968eb5408-operator-scripts\") pod \"glance-db-create-rccl4\" (UID: \"15cb2c0b-9fbe-4047-b752-b96968eb5408\") " pod="openstack/glance-db-create-rccl4"
Feb 20 15:16:22.434001 master-0 kubenswrapper[28120]: I0220 15:16:22.433853 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5gph\" (UniqueName: \"kubernetes.io/projected/e5684c90-d24c-4dff-b5aa-ae72282b2a6f-kube-api-access-j5gph\") pod \"glance-2588-account-create-update-2jr7b\" (UID: \"e5684c90-d24c-4dff-b5aa-ae72282b2a6f\") " pod="openstack/glance-2588-account-create-update-2jr7b"
Feb 20 15:16:22.434211 master-0 kubenswrapper[28120]: I0220 15:16:22.434088 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5684c90-d24c-4dff-b5aa-ae72282b2a6f-operator-scripts\") pod \"glance-2588-account-create-update-2jr7b\" (UID: \"e5684c90-d24c-4dff-b5aa-ae72282b2a6f\") " pod="openstack/glance-2588-account-create-update-2jr7b"
Feb 20 15:16:22.434211 master-0 kubenswrapper[28120]: I0220 15:16:22.434123 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15cb2c0b-9fbe-4047-b752-b96968eb5408-operator-scripts\") pod \"glance-db-create-rccl4\" (UID: \"15cb2c0b-9fbe-4047-b752-b96968eb5408\") " pod="openstack/glance-db-create-rccl4"
Feb 20 15:16:22.434211 master-0 kubenswrapper[28120]: I0220 15:16:22.434182 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9r25\" (UniqueName: \"kubernetes.io/projected/15cb2c0b-9fbe-4047-b752-b96968eb5408-kube-api-access-k9r25\") pod \"glance-db-create-rccl4\" (UID: \"15cb2c0b-9fbe-4047-b752-b96968eb5408\") " pod="openstack/glance-db-create-rccl4"
Feb 20 15:16:22.435108 master-0 kubenswrapper[28120]: I0220 15:16:22.435057 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15cb2c0b-9fbe-4047-b752-b96968eb5408-operator-scripts\") pod \"glance-db-create-rccl4\" (UID: \"15cb2c0b-9fbe-4047-b752-b96968eb5408\") " pod="openstack/glance-db-create-rccl4"
Feb 20 15:16:22.435214 master-0 kubenswrapper[28120]: I0220 15:16:22.435103 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5684c90-d24c-4dff-b5aa-ae72282b2a6f-operator-scripts\") pod \"glance-2588-account-create-update-2jr7b\" (UID: \"e5684c90-d24c-4dff-b5aa-ae72282b2a6f\") " pod="openstack/glance-2588-account-create-update-2jr7b"
Feb 20 15:16:22.450803 master-0 kubenswrapper[28120]: I0220 15:16:22.450721 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9r25\" (UniqueName: \"kubernetes.io/projected/15cb2c0b-9fbe-4047-b752-b96968eb5408-kube-api-access-k9r25\") pod \"glance-db-create-rccl4\" (UID: \"15cb2c0b-9fbe-4047-b752-b96968eb5408\") " pod="openstack/glance-db-create-rccl4"
Feb 20 15:16:22.451692 master-0 kubenswrapper[28120]: I0220 15:16:22.451631 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5gph\" (UniqueName: \"kubernetes.io/projected/e5684c90-d24c-4dff-b5aa-ae72282b2a6f-kube-api-access-j5gph\") pod \"glance-2588-account-create-update-2jr7b\" (UID: \"e5684c90-d24c-4dff-b5aa-ae72282b2a6f\") " pod="openstack/glance-2588-account-create-update-2jr7b"
Feb 20 15:16:22.532408 master-0 kubenswrapper[28120]: I0220 15:16:22.532316 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rccl4"
Feb 20 15:16:22.606577 master-0 kubenswrapper[28120]: I0220 15:16:22.606474 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-kcsfb"]
Feb 20 15:16:22.608132 master-0 kubenswrapper[28120]: I0220 15:16:22.608094 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kcsfb"
Feb 20 15:16:22.622795 master-0 kubenswrapper[28120]: I0220 15:16:22.622721 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kcsfb"]
Feb 20 15:16:22.634904 master-0 kubenswrapper[28120]: I0220 15:16:22.634850 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2588-account-create-update-2jr7b"
Feb 20 15:16:22.638162 master-0 kubenswrapper[28120]: I0220 15:16:22.638118 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkjkd\" (UniqueName: \"kubernetes.io/projected/6721337b-1c31-48d0-88e1-f3251edbebc3-kube-api-access-fkjkd\") pod \"keystone-db-create-kcsfb\" (UID: \"6721337b-1c31-48d0-88e1-f3251edbebc3\") " pod="openstack/keystone-db-create-kcsfb"
Feb 20 15:16:22.638324 master-0 kubenswrapper[28120]: I0220 15:16:22.638255 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6721337b-1c31-48d0-88e1-f3251edbebc3-operator-scripts\") pod \"keystone-db-create-kcsfb\" (UID: \"6721337b-1c31-48d0-88e1-f3251edbebc3\") " pod="openstack/keystone-db-create-kcsfb"
Feb 20 15:16:22.713028 master-0 kubenswrapper[28120]: I0220 15:16:22.712838 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3973-account-create-update-hf6wb"]
Feb 20 15:16:22.717106 master-0 kubenswrapper[28120]: I0220 15:16:22.715260 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3973-account-create-update-hf6wb"
Feb 20 15:16:22.718822 master-0 kubenswrapper[28120]: I0220 15:16:22.718353 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 20 15:16:22.725289 master-0 kubenswrapper[28120]: I0220 15:16:22.725223 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3973-account-create-update-hf6wb"]
Feb 20 15:16:22.740515 master-0 kubenswrapper[28120]: I0220 15:16:22.740431 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrgm8\" (UniqueName: \"kubernetes.io/projected/2b332210-786b-421d-8b99-5dcfbeb5196c-kube-api-access-nrgm8\") pod \"keystone-3973-account-create-update-hf6wb\" (UID: \"2b332210-786b-421d-8b99-5dcfbeb5196c\") " pod="openstack/keystone-3973-account-create-update-hf6wb"
Feb 20 15:16:22.740515 master-0 kubenswrapper[28120]: I0220 15:16:22.740490 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b332210-786b-421d-8b99-5dcfbeb5196c-operator-scripts\") pod \"keystone-3973-account-create-update-hf6wb\" (UID: \"2b332210-786b-421d-8b99-5dcfbeb5196c\") " pod="openstack/keystone-3973-account-create-update-hf6wb"
Feb 20 15:16:22.740789 master-0 kubenswrapper[28120]: I0220 15:16:22.740530 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkjkd\" (UniqueName: \"kubernetes.io/projected/6721337b-1c31-48d0-88e1-f3251edbebc3-kube-api-access-fkjkd\") pod \"keystone-db-create-kcsfb\" (UID: \"6721337b-1c31-48d0-88e1-f3251edbebc3\") " pod="openstack/keystone-db-create-kcsfb"
Feb 20 15:16:22.740789 master-0 kubenswrapper[28120]: I0220 15:16:22.740567 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6721337b-1c31-48d0-88e1-f3251edbebc3-operator-scripts\") pod \"keystone-db-create-kcsfb\" (UID: \"6721337b-1c31-48d0-88e1-f3251edbebc3\") " pod="openstack/keystone-db-create-kcsfb"
Feb 20 15:16:22.744705 master-0 kubenswrapper[28120]: I0220 15:16:22.741867 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6721337b-1c31-48d0-88e1-f3251edbebc3-operator-scripts\") pod \"keystone-db-create-kcsfb\" (UID: \"6721337b-1c31-48d0-88e1-f3251edbebc3\") " pod="openstack/keystone-db-create-kcsfb"
Feb 20 15:16:22.793662 master-0 kubenswrapper[28120]: I0220 15:16:22.793620 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkjkd\" (UniqueName: \"kubernetes.io/projected/6721337b-1c31-48d0-88e1-f3251edbebc3-kube-api-access-fkjkd\") pod \"keystone-db-create-kcsfb\" (UID: \"6721337b-1c31-48d0-88e1-f3251edbebc3\") " pod="openstack/keystone-db-create-kcsfb"
Feb 20 15:16:22.799094 master-0 kubenswrapper[28120]: I0220 15:16:22.799059 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-cdlc8"]
Feb 20 15:16:22.800470 master-0 kubenswrapper[28120]: I0220 15:16:22.800444 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cdlc8"
Feb 20 15:16:22.813123 master-0 kubenswrapper[28120]: I0220 15:16:22.810955 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cdlc8"]
Feb 20 15:16:22.842718 master-0 kubenswrapper[28120]: I0220 15:16:22.842675 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50-operator-scripts\") pod \"placement-db-create-cdlc8\" (UID: \"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50\") " pod="openstack/placement-db-create-cdlc8"
Feb 20 15:16:22.843451 master-0 kubenswrapper[28120]: I0220 15:16:22.843417 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrgm8\" (UniqueName: \"kubernetes.io/projected/2b332210-786b-421d-8b99-5dcfbeb5196c-kube-api-access-nrgm8\") pod \"keystone-3973-account-create-update-hf6wb\" (UID: \"2b332210-786b-421d-8b99-5dcfbeb5196c\") " pod="openstack/keystone-3973-account-create-update-hf6wb"
Feb 20 15:16:22.843570 master-0 kubenswrapper[28120]: I0220 15:16:22.843541 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tl7v\" (UniqueName: \"kubernetes.io/projected/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50-kube-api-access-2tl7v\") pod \"placement-db-create-cdlc8\" (UID: \"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50\") " pod="openstack/placement-db-create-cdlc8"
Feb 20 15:16:22.843627 master-0 kubenswrapper[28120]: I0220 15:16:22.843608 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b332210-786b-421d-8b99-5dcfbeb5196c-operator-scripts\") pod \"keystone-3973-account-create-update-hf6wb\" (UID: \"2b332210-786b-421d-8b99-5dcfbeb5196c\") " pod="openstack/keystone-3973-account-create-update-hf6wb"
Feb 20 15:16:22.844307 master-0 kubenswrapper[28120]: I0220 15:16:22.844273 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b332210-786b-421d-8b99-5dcfbeb5196c-operator-scripts\") pod \"keystone-3973-account-create-update-hf6wb\" (UID: \"2b332210-786b-421d-8b99-5dcfbeb5196c\") " pod="openstack/keystone-3973-account-create-update-hf6wb"
Feb 20 15:16:22.866690 master-0 kubenswrapper[28120]: I0220 15:16:22.866482 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrgm8\" (UniqueName: \"kubernetes.io/projected/2b332210-786b-421d-8b99-5dcfbeb5196c-kube-api-access-nrgm8\") pod \"keystone-3973-account-create-update-hf6wb\" (UID: \"2b332210-786b-421d-8b99-5dcfbeb5196c\") " pod="openstack/keystone-3973-account-create-update-hf6wb"
Feb 20 15:16:22.945949 master-0 kubenswrapper[28120]: I0220 15:16:22.945846 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tl7v\" (UniqueName: \"kubernetes.io/projected/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50-kube-api-access-2tl7v\") pod \"placement-db-create-cdlc8\" (UID: \"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50\") " pod="openstack/placement-db-create-cdlc8"
Feb 20 15:16:22.946203 master-0 kubenswrapper[28120]: I0220 15:16:22.945993 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50-operator-scripts\") pod \"placement-db-create-cdlc8\" (UID: \"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50\") " pod="openstack/placement-db-create-cdlc8"
Feb 20 15:16:22.947163 master-0 kubenswrapper[28120]: I0220 15:16:22.947136 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50-operator-scripts\") pod \"placement-db-create-cdlc8\" (UID: \"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50\") " pod="openstack/placement-db-create-cdlc8"
Feb 20 15:16:22.950642 master-0 kubenswrapper[28120]: I0220 15:16:22.950605 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5334-account-create-update-s44rq"]
Feb 20 15:16:22.954209 master-0 kubenswrapper[28120]: I0220 15:16:22.954170 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5334-account-create-update-s44rq"
Feb 20 15:16:22.957614 master-0 kubenswrapper[28120]: I0220 15:16:22.957577 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 20 15:16:22.963205 master-0 kubenswrapper[28120]: I0220 15:16:22.963127 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5334-account-create-update-s44rq"]
Feb 20 15:16:22.965420 master-0 kubenswrapper[28120]: I0220 15:16:22.964890 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tl7v\" (UniqueName: \"kubernetes.io/projected/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50-kube-api-access-2tl7v\") pod \"placement-db-create-cdlc8\" (UID: \"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50\") " pod="openstack/placement-db-create-cdlc8"
Feb 20 15:16:22.978655 master-0 kubenswrapper[28120]: I0220 15:16:22.978599 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kcsfb"
Feb 20 15:16:23.046546 master-0 kubenswrapper[28120]: I0220 15:16:23.046503 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3973-account-create-update-hf6wb"
Feb 20 15:16:23.047565 master-0 kubenswrapper[28120]: I0220 15:16:23.047522 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccdb626-016f-4380-9396-9597993bc3df-operator-scripts\") pod \"placement-5334-account-create-update-s44rq\" (UID: \"2ccdb626-016f-4380-9396-9597993bc3df\") " pod="openstack/placement-5334-account-create-update-s44rq"
Feb 20 15:16:23.047661 master-0 kubenswrapper[28120]: I0220 15:16:23.047602 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jflkq\" (UniqueName: \"kubernetes.io/projected/2ccdb626-016f-4380-9396-9597993bc3df-kube-api-access-jflkq\") pod \"placement-5334-account-create-update-s44rq\" (UID: \"2ccdb626-016f-4380-9396-9597993bc3df\") " pod="openstack/placement-5334-account-create-update-s44rq"
Feb 20 15:16:23.092078 master-0 kubenswrapper[28120]: I0220 15:16:23.092019 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tk445"
Feb 20 15:16:23.119350 master-0 kubenswrapper[28120]: I0220 15:16:23.119289 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tk445" event={"ID":"812afa64-06d1-4cb1-95b4-93dda2cce8eb","Type":"ContainerDied","Data":"c44324ea06e821177dbf9eaf018a2e8f57f97d5e182d39044798d771e9172850"}
Feb 20 15:16:23.119666 master-0 kubenswrapper[28120]: I0220 15:16:23.119637 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c44324ea06e821177dbf9eaf018a2e8f57f97d5e182d39044798d771e9172850"
Feb 20 15:16:23.119786 master-0 kubenswrapper[28120]: I0220 15:16:23.119592 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tk445"
Feb 20 15:16:23.138642 master-0 kubenswrapper[28120]: I0220 15:16:23.138575 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cdlc8"
Feb 20 15:16:23.151011 master-0 kubenswrapper[28120]: I0220 15:16:23.150916 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/812afa64-06d1-4cb1-95b4-93dda2cce8eb-operator-scripts\") pod \"812afa64-06d1-4cb1-95b4-93dda2cce8eb\" (UID: \"812afa64-06d1-4cb1-95b4-93dda2cce8eb\") "
Feb 20 15:16:23.151839 master-0 kubenswrapper[28120]: I0220 15:16:23.151807 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhczl\" (UniqueName: \"kubernetes.io/projected/812afa64-06d1-4cb1-95b4-93dda2cce8eb-kube-api-access-zhczl\") pod \"812afa64-06d1-4cb1-95b4-93dda2cce8eb\" (UID: \"812afa64-06d1-4cb1-95b4-93dda2cce8eb\") "
Feb 20 15:16:23.152165 master-0 kubenswrapper[28120]: I0220 15:16:23.152086 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812afa64-06d1-4cb1-95b4-93dda2cce8eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "812afa64-06d1-4cb1-95b4-93dda2cce8eb" (UID: "812afa64-06d1-4cb1-95b4-93dda2cce8eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:16:23.154233 master-0 kubenswrapper[28120]: I0220 15:16:23.154196 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccdb626-016f-4380-9396-9597993bc3df-operator-scripts\") pod \"placement-5334-account-create-update-s44rq\" (UID: \"2ccdb626-016f-4380-9396-9597993bc3df\") " pod="openstack/placement-5334-account-create-update-s44rq"
Feb 20 15:16:23.154531 master-0 kubenswrapper[28120]: I0220 15:16:23.154503 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jflkq\" (UniqueName: \"kubernetes.io/projected/2ccdb626-016f-4380-9396-9597993bc3df-kube-api-access-jflkq\") pod \"placement-5334-account-create-update-s44rq\" (UID: \"2ccdb626-016f-4380-9396-9597993bc3df\") " pod="openstack/placement-5334-account-create-update-s44rq"
Feb 20 15:16:23.154867 master-0 kubenswrapper[28120]: I0220 15:16:23.154840 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/812afa64-06d1-4cb1-95b4-93dda2cce8eb-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:23.158284 master-0 kubenswrapper[28120]: I0220 15:16:23.158222 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccdb626-016f-4380-9396-9597993bc3df-operator-scripts\") pod \"placement-5334-account-create-update-s44rq\" (UID: \"2ccdb626-016f-4380-9396-9597993bc3df\") " pod="openstack/placement-5334-account-create-update-s44rq"
Feb 20 15:16:23.159301 master-0 kubenswrapper[28120]: I0220 15:16:23.159168 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812afa64-06d1-4cb1-95b4-93dda2cce8eb-kube-api-access-zhczl" (OuterVolumeSpecName: "kube-api-access-zhczl") pod "812afa64-06d1-4cb1-95b4-93dda2cce8eb" (UID: "812afa64-06d1-4cb1-95b4-93dda2cce8eb"). InnerVolumeSpecName "kube-api-access-zhczl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:16:23.178147 master-0 kubenswrapper[28120]: I0220 15:16:23.178088 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jflkq\" (UniqueName: \"kubernetes.io/projected/2ccdb626-016f-4380-9396-9597993bc3df-kube-api-access-jflkq\") pod \"placement-5334-account-create-update-s44rq\" (UID: \"2ccdb626-016f-4380-9396-9597993bc3df\") " pod="openstack/placement-5334-account-create-update-s44rq"
Feb 20 15:16:23.256741 master-0 kubenswrapper[28120]: I0220 15:16:23.256679 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhczl\" (UniqueName: \"kubernetes.io/projected/812afa64-06d1-4cb1-95b4-93dda2cce8eb-kube-api-access-zhczl\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:23.314386 master-0 kubenswrapper[28120]: I0220 15:16:23.314329 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5334-account-create-update-s44rq"
Feb 20 15:16:23.578020 master-0 kubenswrapper[28120]: I0220 15:16:23.577947 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0"
Feb 20 15:16:23.578543 master-0 kubenswrapper[28120]: E0220 15:16:23.578451 28120 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 20 15:16:23.578543 master-0 kubenswrapper[28120]: E0220 15:16:23.578481 28120 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 20 15:16:23.578543 master-0 kubenswrapper[28120]: E0220 15:16:23.578526 28120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift podName:742c46aa-374c-4e50-a4a1-6e46f7c13937 nodeName:}" failed. No retries permitted until 2026-02-20 15:16:31.578510225 +0000 UTC m=+929.839303788 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift") pod "swift-storage-0" (UID: "742c46aa-374c-4e50-a4a1-6e46f7c13937") : configmap "swift-ring-files" not found
Feb 20 15:16:23.934617 master-0 kubenswrapper[28120]: I0220 15:16:23.934573 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kcsfb"]
Feb 20 15:16:23.950037 master-0 kubenswrapper[28120]: W0220 15:16:23.949996 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b332210_786b_421d_8b99_5dcfbeb5196c.slice/crio-d9d635fdf7b7037b535917530454f880b574c76b66cca392fd30c3ce61548a47 WatchSource:0}: Error finding container d9d635fdf7b7037b535917530454f880b574c76b66cca392fd30c3ce61548a47: Status 404 returned error can't find the container with id d9d635fdf7b7037b535917530454f880b574c76b66cca392fd30c3ce61548a47
Feb 20 15:16:23.952814 master-0 kubenswrapper[28120]: I0220 15:16:23.952774 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3973-account-create-update-hf6wb"]
Feb 20 15:16:23.964300 master-0 kubenswrapper[28120]: I0220 15:16:23.964256 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2588-account-create-update-2jr7b"]
Feb 20 15:16:23.968849 master-0 kubenswrapper[28120]: W0220 15:16:23.968792 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15cb2c0b_9fbe_4047_b752_b96968eb5408.slice/crio-a74f77929970879ccab44ab312756cc82091a8803ccdc76e1f03e55f4701d7d3 WatchSource:0}: Error finding container a74f77929970879ccab44ab312756cc82091a8803ccdc76e1f03e55f4701d7d3: Status 404 returned error can't find the container with id a74f77929970879ccab44ab312756cc82091a8803ccdc76e1f03e55f4701d7d3
Feb 20 15:16:23.980209 master-0 kubenswrapper[28120]: I0220 15:16:23.980149 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rccl4"]
Feb 20 15:16:24.003640 master-0 kubenswrapper[28120]: I0220 15:16:24.003582 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cdlc8"]
Feb 20 15:16:24.029769 master-0 kubenswrapper[28120]: W0220 15:16:24.029714 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25bbf6e5_6c71_4b45_9aa6_cbfe06455f50.slice/crio-a7c0955d6c586db45bb21cf4ddbc438ebb1d0f50a492bbda871c6bdf9a54788b WatchSource:0}: Error finding container a7c0955d6c586db45bb21cf4ddbc438ebb1d0f50a492bbda871c6bdf9a54788b: Status 404 returned error can't find the container with id a7c0955d6c586db45bb21cf4ddbc438ebb1d0f50a492bbda871c6bdf9a54788b
Feb 20 15:16:24.137368 master-0 kubenswrapper[28120]: I0220 15:16:24.137152 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3973-account-create-update-hf6wb" event={"ID":"2b332210-786b-421d-8b99-5dcfbeb5196c","Type":"ContainerStarted","Data":"d9d635fdf7b7037b535917530454f880b574c76b66cca392fd30c3ce61548a47"}
Feb 20 15:16:24.142947 master-0 kubenswrapper[28120]: I0220 15:16:24.142886 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rccl4" event={"ID":"15cb2c0b-9fbe-4047-b752-b96968eb5408","Type":"ContainerStarted","Data":"a74f77929970879ccab44ab312756cc82091a8803ccdc76e1f03e55f4701d7d3"}
Feb 20 15:16:24.144601 master-0 kubenswrapper[28120]: I0220 15:16:24.144575 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kcsfb" event={"ID":"6721337b-1c31-48d0-88e1-f3251edbebc3","Type":"ContainerStarted","Data":"9c2a6eb6f08d30cffb052df8f29b6a77229d1c5060fbd1ffd345cbce9d8228c5"}
Feb 20 15:16:24.146421 master-0 kubenswrapper[28120]: I0220 15:16:24.146345 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-chd7d"
event={"ID":"e9cef7d6-6daa-4419-8bfa-4591fab1d15e","Type":"ContainerStarted","Data":"d10b3ea93bd1033515717862125ddc4d1e58a40f2f81af9d93edfccdc389873c"} Feb 20 15:16:24.147806 master-0 kubenswrapper[28120]: I0220 15:16:24.147758 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cdlc8" event={"ID":"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50","Type":"ContainerStarted","Data":"a7c0955d6c586db45bb21cf4ddbc438ebb1d0f50a492bbda871c6bdf9a54788b"} Feb 20 15:16:24.149435 master-0 kubenswrapper[28120]: I0220 15:16:24.149387 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2588-account-create-update-2jr7b" event={"ID":"e5684c90-d24c-4dff-b5aa-ae72282b2a6f","Type":"ContainerStarted","Data":"5a927988120b0a398db46d6083726e6b522781b03e9753306e3cfe1b7a790fa8"} Feb 20 15:16:24.172451 master-0 kubenswrapper[28120]: I0220 15:16:24.170020 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:24.174273 master-0 kubenswrapper[28120]: I0220 15:16:24.172652 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-chd7d" podStartSLOduration=2.7119142 podStartE2EDuration="6.172639463s" podCreationTimestamp="2026-02-20 15:16:18 +0000 UTC" firstStartedPulling="2026-02-20 15:16:19.786913215 +0000 UTC m=+918.047706778" lastFinishedPulling="2026-02-20 15:16:23.247638478 +0000 UTC m=+921.508432041" observedRunningTime="2026-02-20 15:16:24.168605932 +0000 UTC m=+922.429399495" watchObservedRunningTime="2026-02-20 15:16:24.172639463 +0000 UTC m=+922.433433026" Feb 20 15:16:24.253946 master-0 kubenswrapper[28120]: I0220 15:16:24.253885 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-lznrf"] Feb 20 15:16:24.255377 master-0 kubenswrapper[28120]: I0220 15:16:24.254143 28120 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" podUID="c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" containerName="dnsmasq-dns" containerID="cri-o://53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65" gracePeriod=10 Feb 20 15:16:24.285123 master-0 kubenswrapper[28120]: I0220 15:16:24.284592 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5334-account-create-update-s44rq"] Feb 20 15:16:24.958055 master-0 kubenswrapper[28120]: I0220 15:16:24.957999 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" Feb 20 15:16:25.103945 master-0 kubenswrapper[28120]: I0220 15:16:25.103222 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tk445"] Feb 20 15:16:25.116772 master-0 kubenswrapper[28120]: I0220 15:16:25.116716 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tk445"] Feb 20 15:16:25.123023 master-0 kubenswrapper[28120]: I0220 15:16:25.120468 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hmqf\" (UniqueName: \"kubernetes.io/projected/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-kube-api-access-9hmqf\") pod \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " Feb 20 15:16:25.123023 master-0 kubenswrapper[28120]: I0220 15:16:25.120519 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-config\") pod \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " Feb 20 15:16:25.123023 master-0 kubenswrapper[28120]: I0220 15:16:25.120561 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-dns-svc\") pod 
\"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " Feb 20 15:16:25.123023 master-0 kubenswrapper[28120]: I0220 15:16:25.120605 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-ovsdbserver-nb\") pod \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\" (UID: \"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7\") " Feb 20 15:16:25.126053 master-0 kubenswrapper[28120]: I0220 15:16:25.125995 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-kube-api-access-9hmqf" (OuterVolumeSpecName: "kube-api-access-9hmqf") pod "c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" (UID: "c8e6f35c-0dcb-4fc0-a236-7245736b3ae7"). InnerVolumeSpecName "kube-api-access-9hmqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:16:25.166358 master-0 kubenswrapper[28120]: I0220 15:16:25.166235 28120 generic.go:334] "Generic (PLEG): container finished" podID="e5684c90-d24c-4dff-b5aa-ae72282b2a6f" containerID="94baa803cdd56c0f788e6479e529573f2e5370046a342af01b950d2b010c996d" exitCode=0 Feb 20 15:16:25.166517 master-0 kubenswrapper[28120]: I0220 15:16:25.166334 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2588-account-create-update-2jr7b" event={"ID":"e5684c90-d24c-4dff-b5aa-ae72282b2a6f","Type":"ContainerDied","Data":"94baa803cdd56c0f788e6479e529573f2e5370046a342af01b950d2b010c996d"} Feb 20 15:16:25.169123 master-0 kubenswrapper[28120]: I0220 15:16:25.168888 28120 generic.go:334] "Generic (PLEG): container finished" podID="c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" containerID="53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65" exitCode=0 Feb 20 15:16:25.169123 master-0 kubenswrapper[28120]: I0220 15:16:25.168961 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" 
event={"ID":"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7","Type":"ContainerDied","Data":"53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65"} Feb 20 15:16:25.169123 master-0 kubenswrapper[28120]: I0220 15:16:25.168979 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" event={"ID":"c8e6f35c-0dcb-4fc0-a236-7245736b3ae7","Type":"ContainerDied","Data":"6936dac0b6be8d2ea3967369ac557fab2956a4ae36f3c348fa874c8ba443546e"} Feb 20 15:16:25.169123 master-0 kubenswrapper[28120]: I0220 15:16:25.168981 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-lznrf" Feb 20 15:16:25.169123 master-0 kubenswrapper[28120]: I0220 15:16:25.169010 28120 scope.go:117] "RemoveContainer" containerID="53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65" Feb 20 15:16:25.171875 master-0 kubenswrapper[28120]: I0220 15:16:25.171829 28120 generic.go:334] "Generic (PLEG): container finished" podID="2b332210-786b-421d-8b99-5dcfbeb5196c" containerID="2486e8350cca4ed7a3f4dccfcc686f86107ff4868f3c24c848fa9851e1f99f44" exitCode=0 Feb 20 15:16:25.171953 master-0 kubenswrapper[28120]: I0220 15:16:25.171882 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3973-account-create-update-hf6wb" event={"ID":"2b332210-786b-421d-8b99-5dcfbeb5196c","Type":"ContainerDied","Data":"2486e8350cca4ed7a3f4dccfcc686f86107ff4868f3c24c848fa9851e1f99f44"} Feb 20 15:16:25.174980 master-0 kubenswrapper[28120]: I0220 15:16:25.174937 28120 generic.go:334] "Generic (PLEG): container finished" podID="15cb2c0b-9fbe-4047-b752-b96968eb5408" containerID="23cc41505651c526a1b48385622973fe629b225cfc2a5f88335b6e56a3569e64" exitCode=0 Feb 20 15:16:25.175055 master-0 kubenswrapper[28120]: I0220 15:16:25.174985 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rccl4" 
event={"ID":"15cb2c0b-9fbe-4047-b752-b96968eb5408","Type":"ContainerDied","Data":"23cc41505651c526a1b48385622973fe629b225cfc2a5f88335b6e56a3569e64"} Feb 20 15:16:25.183028 master-0 kubenswrapper[28120]: I0220 15:16:25.182850 28120 generic.go:334] "Generic (PLEG): container finished" podID="6721337b-1c31-48d0-88e1-f3251edbebc3" containerID="8d05d0e6d15707bc3dcff498ac6208f34998d1fcd446f63947d46c5d8f8d75dd" exitCode=0 Feb 20 15:16:25.183028 master-0 kubenswrapper[28120]: I0220 15:16:25.182962 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kcsfb" event={"ID":"6721337b-1c31-48d0-88e1-f3251edbebc3","Type":"ContainerDied","Data":"8d05d0e6d15707bc3dcff498ac6208f34998d1fcd446f63947d46c5d8f8d75dd"} Feb 20 15:16:25.194221 master-0 kubenswrapper[28120]: I0220 15:16:25.190358 28120 generic.go:334] "Generic (PLEG): container finished" podID="2ccdb626-016f-4380-9396-9597993bc3df" containerID="26d740bd52dbb799f39ad6993bfd71fe7d05ab157c6d37770eaad4e0fa5d9971" exitCode=0 Feb 20 15:16:25.194221 master-0 kubenswrapper[28120]: I0220 15:16:25.190428 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5334-account-create-update-s44rq" event={"ID":"2ccdb626-016f-4380-9396-9597993bc3df","Type":"ContainerDied","Data":"26d740bd52dbb799f39ad6993bfd71fe7d05ab157c6d37770eaad4e0fa5d9971"} Feb 20 15:16:25.194221 master-0 kubenswrapper[28120]: I0220 15:16:25.190507 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5334-account-create-update-s44rq" event={"ID":"2ccdb626-016f-4380-9396-9597993bc3df","Type":"ContainerStarted","Data":"a7588d68285632cfe7b7a6b990d4b7e520a8f34c5155071dc7fe6617db36347e"} Feb 20 15:16:25.194221 master-0 kubenswrapper[28120]: I0220 15:16:25.192878 28120 generic.go:334] "Generic (PLEG): container finished" podID="25bbf6e5-6c71-4b45-9aa6-cbfe06455f50" containerID="7dcd675589c3794fef1eed51a5e221c41edf9244eae9084a15620057b60d792b" exitCode=0 Feb 20 15:16:25.194221 master-0 
kubenswrapper[28120]: I0220 15:16:25.193901 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cdlc8" event={"ID":"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50","Type":"ContainerDied","Data":"7dcd675589c3794fef1eed51a5e221c41edf9244eae9084a15620057b60d792b"} Feb 20 15:16:25.202347 master-0 kubenswrapper[28120]: I0220 15:16:25.202301 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-config" (OuterVolumeSpecName: "config") pod "c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" (UID: "c8e6f35c-0dcb-4fc0-a236-7245736b3ae7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:25.204468 master-0 kubenswrapper[28120]: I0220 15:16:25.204424 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" (UID: "c8e6f35c-0dcb-4fc0-a236-7245736b3ae7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:25.208398 master-0 kubenswrapper[28120]: I0220 15:16:25.208180 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" (UID: "c8e6f35c-0dcb-4fc0-a236-7245736b3ae7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:25.228267 master-0 kubenswrapper[28120]: I0220 15:16:25.228136 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hmqf\" (UniqueName: \"kubernetes.io/projected/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-kube-api-access-9hmqf\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:25.228267 master-0 kubenswrapper[28120]: I0220 15:16:25.228188 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:25.228267 master-0 kubenswrapper[28120]: I0220 15:16:25.228203 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:25.228267 master-0 kubenswrapper[28120]: I0220 15:16:25.228216 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:25.251562 master-0 kubenswrapper[28120]: I0220 15:16:25.251480 28120 scope.go:117] "RemoveContainer" containerID="24ae547f022d6f5ee6a086e436e3927f70b26bd810a8bf76e3b3045c260f61d6" Feb 20 15:16:25.286931 master-0 kubenswrapper[28120]: I0220 15:16:25.286854 28120 scope.go:117] "RemoveContainer" containerID="53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65" Feb 20 15:16:25.287656 master-0 kubenswrapper[28120]: E0220 15:16:25.287484 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65\": container with ID starting with 53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65 not found: ID does not exist" 
containerID="53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65" Feb 20 15:16:25.287656 master-0 kubenswrapper[28120]: I0220 15:16:25.287533 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65"} err="failed to get container status \"53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65\": rpc error: code = NotFound desc = could not find container \"53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65\": container with ID starting with 53e018325ffcc3cc698d08257b35affd29c14be51833c264ab08b53cf5bc0a65 not found: ID does not exist" Feb 20 15:16:25.287656 master-0 kubenswrapper[28120]: I0220 15:16:25.287560 28120 scope.go:117] "RemoveContainer" containerID="24ae547f022d6f5ee6a086e436e3927f70b26bd810a8bf76e3b3045c260f61d6" Feb 20 15:16:25.287951 master-0 kubenswrapper[28120]: E0220 15:16:25.287884 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24ae547f022d6f5ee6a086e436e3927f70b26bd810a8bf76e3b3045c260f61d6\": container with ID starting with 24ae547f022d6f5ee6a086e436e3927f70b26bd810a8bf76e3b3045c260f61d6 not found: ID does not exist" containerID="24ae547f022d6f5ee6a086e436e3927f70b26bd810a8bf76e3b3045c260f61d6" Feb 20 15:16:25.288036 master-0 kubenswrapper[28120]: I0220 15:16:25.287910 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ae547f022d6f5ee6a086e436e3927f70b26bd810a8bf76e3b3045c260f61d6"} err="failed to get container status \"24ae547f022d6f5ee6a086e436e3927f70b26bd810a8bf76e3b3045c260f61d6\": rpc error: code = NotFound desc = could not find container \"24ae547f022d6f5ee6a086e436e3927f70b26bd810a8bf76e3b3045c260f61d6\": container with ID starting with 24ae547f022d6f5ee6a086e436e3927f70b26bd810a8bf76e3b3045c260f61d6 not found: ID does not exist" Feb 20 15:16:25.528233 master-0 
kubenswrapper[28120]: I0220 15:16:25.528177 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-lznrf"] Feb 20 15:16:25.540206 master-0 kubenswrapper[28120]: I0220 15:16:25.540128 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-lznrf"] Feb 20 15:16:26.080683 master-0 kubenswrapper[28120]: I0220 15:16:26.080596 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812afa64-06d1-4cb1-95b4-93dda2cce8eb" path="/var/lib/kubelet/pods/812afa64-06d1-4cb1-95b4-93dda2cce8eb/volumes" Feb 20 15:16:26.082642 master-0 kubenswrapper[28120]: I0220 15:16:26.082597 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" path="/var/lib/kubelet/pods/c8e6f35c-0dcb-4fc0-a236-7245736b3ae7/volumes" Feb 20 15:16:26.852769 master-0 kubenswrapper[28120]: I0220 15:16:26.852724 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5334-account-create-update-s44rq" Feb 20 15:16:26.975023 master-0 kubenswrapper[28120]: I0220 15:16:26.972531 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jflkq\" (UniqueName: \"kubernetes.io/projected/2ccdb626-016f-4380-9396-9597993bc3df-kube-api-access-jflkq\") pod \"2ccdb626-016f-4380-9396-9597993bc3df\" (UID: \"2ccdb626-016f-4380-9396-9597993bc3df\") " Feb 20 15:16:26.975023 master-0 kubenswrapper[28120]: I0220 15:16:26.972809 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccdb626-016f-4380-9396-9597993bc3df-operator-scripts\") pod \"2ccdb626-016f-4380-9396-9597993bc3df\" (UID: \"2ccdb626-016f-4380-9396-9597993bc3df\") " Feb 20 15:16:26.975023 master-0 kubenswrapper[28120]: I0220 15:16:26.973936 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2ccdb626-016f-4380-9396-9597993bc3df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ccdb626-016f-4380-9396-9597993bc3df" (UID: "2ccdb626-016f-4380-9396-9597993bc3df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:26.981220 master-0 kubenswrapper[28120]: I0220 15:16:26.981173 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ccdb626-016f-4380-9396-9597993bc3df-kube-api-access-jflkq" (OuterVolumeSpecName: "kube-api-access-jflkq") pod "2ccdb626-016f-4380-9396-9597993bc3df" (UID: "2ccdb626-016f-4380-9396-9597993bc3df"). InnerVolumeSpecName "kube-api-access-jflkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:16:27.074831 master-0 kubenswrapper[28120]: I0220 15:16:27.074800 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccdb626-016f-4380-9396-9597993bc3df-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:27.075049 master-0 kubenswrapper[28120]: I0220 15:16:27.075035 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jflkq\" (UniqueName: \"kubernetes.io/projected/2ccdb626-016f-4380-9396-9597993bc3df-kube-api-access-jflkq\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:27.226825 master-0 kubenswrapper[28120]: I0220 15:16:27.226728 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-rccl4" Feb 20 15:16:27.235278 master-0 kubenswrapper[28120]: I0220 15:16:27.235244 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5334-account-create-update-s44rq" event={"ID":"2ccdb626-016f-4380-9396-9597993bc3df","Type":"ContainerDied","Data":"a7588d68285632cfe7b7a6b990d4b7e520a8f34c5155071dc7fe6617db36347e"} Feb 20 15:16:27.235493 master-0 kubenswrapper[28120]: I0220 15:16:27.235478 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7588d68285632cfe7b7a6b990d4b7e520a8f34c5155071dc7fe6617db36347e" Feb 20 15:16:27.235598 master-0 kubenswrapper[28120]: I0220 15:16:27.235587 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5334-account-create-update-s44rq" Feb 20 15:16:27.238860 master-0 kubenswrapper[28120]: I0220 15:16:27.238814 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2588-account-create-update-2jr7b" event={"ID":"e5684c90-d24c-4dff-b5aa-ae72282b2a6f","Type":"ContainerDied","Data":"5a927988120b0a398db46d6083726e6b522781b03e9753306e3cfe1b7a790fa8"} Feb 20 15:16:27.238860 master-0 kubenswrapper[28120]: I0220 15:16:27.238861 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a927988120b0a398db46d6083726e6b522781b03e9753306e3cfe1b7a790fa8" Feb 20 15:16:27.240124 master-0 kubenswrapper[28120]: I0220 15:16:27.240095 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-cdlc8" Feb 20 15:16:27.240639 master-0 kubenswrapper[28120]: I0220 15:16:27.240603 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cdlc8" event={"ID":"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50","Type":"ContainerDied","Data":"a7c0955d6c586db45bb21cf4ddbc438ebb1d0f50a492bbda871c6bdf9a54788b"} Feb 20 15:16:27.240693 master-0 kubenswrapper[28120]: I0220 15:16:27.240644 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7c0955d6c586db45bb21cf4ddbc438ebb1d0f50a492bbda871c6bdf9a54788b" Feb 20 15:16:27.265158 master-0 kubenswrapper[28120]: I0220 15:16:27.259606 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3973-account-create-update-hf6wb" event={"ID":"2b332210-786b-421d-8b99-5dcfbeb5196c","Type":"ContainerDied","Data":"d9d635fdf7b7037b535917530454f880b574c76b66cca392fd30c3ce61548a47"} Feb 20 15:16:27.265158 master-0 kubenswrapper[28120]: I0220 15:16:27.259643 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9d635fdf7b7037b535917530454f880b574c76b66cca392fd30c3ce61548a47" Feb 20 15:16:27.265158 master-0 kubenswrapper[28120]: I0220 15:16:27.262186 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rccl4" event={"ID":"15cb2c0b-9fbe-4047-b752-b96968eb5408","Type":"ContainerDied","Data":"a74f77929970879ccab44ab312756cc82091a8803ccdc76e1f03e55f4701d7d3"} Feb 20 15:16:27.265158 master-0 kubenswrapper[28120]: I0220 15:16:27.262244 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74f77929970879ccab44ab312756cc82091a8803ccdc76e1f03e55f4701d7d3" Feb 20 15:16:27.265158 master-0 kubenswrapper[28120]: I0220 15:16:27.262427 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2588-account-create-update-2jr7b" Feb 20 15:16:27.265158 master-0 kubenswrapper[28120]: I0220 15:16:27.262519 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rccl4" Feb 20 15:16:27.288102 master-0 kubenswrapper[28120]: I0220 15:16:27.288016 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kcsfb" event={"ID":"6721337b-1c31-48d0-88e1-f3251edbebc3","Type":"ContainerDied","Data":"9c2a6eb6f08d30cffb052df8f29b6a77229d1c5060fbd1ffd345cbce9d8228c5"} Feb 20 15:16:27.288102 master-0 kubenswrapper[28120]: I0220 15:16:27.288061 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c2a6eb6f08d30cffb052df8f29b6a77229d1c5060fbd1ffd345cbce9d8228c5" Feb 20 15:16:27.293721 master-0 kubenswrapper[28120]: I0220 15:16:27.293671 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kcsfb" Feb 20 15:16:27.304026 master-0 kubenswrapper[28120]: I0220 15:16:27.303128 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3973-account-create-update-hf6wb" Feb 20 15:16:27.380967 master-0 kubenswrapper[28120]: I0220 15:16:27.380880 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9r25\" (UniqueName: \"kubernetes.io/projected/15cb2c0b-9fbe-4047-b752-b96968eb5408-kube-api-access-k9r25\") pod \"15cb2c0b-9fbe-4047-b752-b96968eb5408\" (UID: \"15cb2c0b-9fbe-4047-b752-b96968eb5408\") " Feb 20 15:16:27.381167 master-0 kubenswrapper[28120]: I0220 15:16:27.380992 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15cb2c0b-9fbe-4047-b752-b96968eb5408-operator-scripts\") pod \"15cb2c0b-9fbe-4047-b752-b96968eb5408\" (UID: \"15cb2c0b-9fbe-4047-b752-b96968eb5408\") " Feb 20 15:16:27.381167 master-0 kubenswrapper[28120]: I0220 15:16:27.381091 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50-operator-scripts\") pod \"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50\" (UID: \"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50\") " Feb 20 15:16:27.381167 master-0 kubenswrapper[28120]: I0220 15:16:27.381129 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5gph\" (UniqueName: \"kubernetes.io/projected/e5684c90-d24c-4dff-b5aa-ae72282b2a6f-kube-api-access-j5gph\") pod \"e5684c90-d24c-4dff-b5aa-ae72282b2a6f\" (UID: \"e5684c90-d24c-4dff-b5aa-ae72282b2a6f\") " Feb 20 15:16:27.381381 master-0 kubenswrapper[28120]: I0220 15:16:27.381335 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tl7v\" (UniqueName: \"kubernetes.io/projected/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50-kube-api-access-2tl7v\") pod \"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50\" (UID: \"25bbf6e5-6c71-4b45-9aa6-cbfe06455f50\") " Feb 20 15:16:27.381431 
master-0 kubenswrapper[28120]: I0220 15:16:27.381414 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5684c90-d24c-4dff-b5aa-ae72282b2a6f-operator-scripts\") pod \"e5684c90-d24c-4dff-b5aa-ae72282b2a6f\" (UID: \"e5684c90-d24c-4dff-b5aa-ae72282b2a6f\") "
Feb 20 15:16:27.381747 master-0 kubenswrapper[28120]: I0220 15:16:27.381678 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15cb2c0b-9fbe-4047-b752-b96968eb5408-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15cb2c0b-9fbe-4047-b752-b96968eb5408" (UID: "15cb2c0b-9fbe-4047-b752-b96968eb5408"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:16:27.382085 master-0 kubenswrapper[28120]: I0220 15:16:27.382037 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15cb2c0b-9fbe-4047-b752-b96968eb5408-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:27.382892 master-0 kubenswrapper[28120]: I0220 15:16:27.382751 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25bbf6e5-6c71-4b45-9aa6-cbfe06455f50" (UID: "25bbf6e5-6c71-4b45-9aa6-cbfe06455f50"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:16:27.382892 master-0 kubenswrapper[28120]: I0220 15:16:27.382857 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5684c90-d24c-4dff-b5aa-ae72282b2a6f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5684c90-d24c-4dff-b5aa-ae72282b2a6f" (UID: "e5684c90-d24c-4dff-b5aa-ae72282b2a6f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:16:27.385164 master-0 kubenswrapper[28120]: I0220 15:16:27.385092 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5684c90-d24c-4dff-b5aa-ae72282b2a6f-kube-api-access-j5gph" (OuterVolumeSpecName: "kube-api-access-j5gph") pod "e5684c90-d24c-4dff-b5aa-ae72282b2a6f" (UID: "e5684c90-d24c-4dff-b5aa-ae72282b2a6f"). InnerVolumeSpecName "kube-api-access-j5gph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:16:27.385362 master-0 kubenswrapper[28120]: I0220 15:16:27.385263 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15cb2c0b-9fbe-4047-b752-b96968eb5408-kube-api-access-k9r25" (OuterVolumeSpecName: "kube-api-access-k9r25") pod "15cb2c0b-9fbe-4047-b752-b96968eb5408" (UID: "15cb2c0b-9fbe-4047-b752-b96968eb5408"). InnerVolumeSpecName "kube-api-access-k9r25". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:16:27.385546 master-0 kubenswrapper[28120]: I0220 15:16:27.385497 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50-kube-api-access-2tl7v" (OuterVolumeSpecName: "kube-api-access-2tl7v") pod "25bbf6e5-6c71-4b45-9aa6-cbfe06455f50" (UID: "25bbf6e5-6c71-4b45-9aa6-cbfe06455f50"). InnerVolumeSpecName "kube-api-access-2tl7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:16:27.483608 master-0 kubenswrapper[28120]: I0220 15:16:27.483479 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6721337b-1c31-48d0-88e1-f3251edbebc3-operator-scripts\") pod \"6721337b-1c31-48d0-88e1-f3251edbebc3\" (UID: \"6721337b-1c31-48d0-88e1-f3251edbebc3\") "
Feb 20 15:16:27.483608 master-0 kubenswrapper[28120]: I0220 15:16:27.483542 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b332210-786b-421d-8b99-5dcfbeb5196c-operator-scripts\") pod \"2b332210-786b-421d-8b99-5dcfbeb5196c\" (UID: \"2b332210-786b-421d-8b99-5dcfbeb5196c\") "
Feb 20 15:16:27.483825 master-0 kubenswrapper[28120]: I0220 15:16:27.483677 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkjkd\" (UniqueName: \"kubernetes.io/projected/6721337b-1c31-48d0-88e1-f3251edbebc3-kube-api-access-fkjkd\") pod \"6721337b-1c31-48d0-88e1-f3251edbebc3\" (UID: \"6721337b-1c31-48d0-88e1-f3251edbebc3\") "
Feb 20 15:16:27.483825 master-0 kubenswrapper[28120]: I0220 15:16:27.483789 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrgm8\" (UniqueName: \"kubernetes.io/projected/2b332210-786b-421d-8b99-5dcfbeb5196c-kube-api-access-nrgm8\") pod \"2b332210-786b-421d-8b99-5dcfbeb5196c\" (UID: \"2b332210-786b-421d-8b99-5dcfbeb5196c\") "
Feb 20 15:16:27.484313 master-0 kubenswrapper[28120]: I0220 15:16:27.484292 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tl7v\" (UniqueName: \"kubernetes.io/projected/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50-kube-api-access-2tl7v\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:27.484313 master-0 kubenswrapper[28120]: I0220 15:16:27.484313 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5684c90-d24c-4dff-b5aa-ae72282b2a6f-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:27.484383 master-0 kubenswrapper[28120]: I0220 15:16:27.484323 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9r25\" (UniqueName: \"kubernetes.io/projected/15cb2c0b-9fbe-4047-b752-b96968eb5408-kube-api-access-k9r25\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:27.484383 master-0 kubenswrapper[28120]: I0220 15:16:27.484334 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:27.484383 master-0 kubenswrapper[28120]: I0220 15:16:27.484343 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5gph\" (UniqueName: \"kubernetes.io/projected/e5684c90-d24c-4dff-b5aa-ae72282b2a6f-kube-api-access-j5gph\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:27.484383 master-0 kubenswrapper[28120]: I0220 15:16:27.484275 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6721337b-1c31-48d0-88e1-f3251edbebc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6721337b-1c31-48d0-88e1-f3251edbebc3" (UID: "6721337b-1c31-48d0-88e1-f3251edbebc3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:16:27.485286 master-0 kubenswrapper[28120]: I0220 15:16:27.485221 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b332210-786b-421d-8b99-5dcfbeb5196c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b332210-786b-421d-8b99-5dcfbeb5196c" (UID: "2b332210-786b-421d-8b99-5dcfbeb5196c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:16:27.487325 master-0 kubenswrapper[28120]: I0220 15:16:27.487286 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b332210-786b-421d-8b99-5dcfbeb5196c-kube-api-access-nrgm8" (OuterVolumeSpecName: "kube-api-access-nrgm8") pod "2b332210-786b-421d-8b99-5dcfbeb5196c" (UID: "2b332210-786b-421d-8b99-5dcfbeb5196c"). InnerVolumeSpecName "kube-api-access-nrgm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:16:27.487836 master-0 kubenswrapper[28120]: I0220 15:16:27.487784 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6721337b-1c31-48d0-88e1-f3251edbebc3-kube-api-access-fkjkd" (OuterVolumeSpecName: "kube-api-access-fkjkd") pod "6721337b-1c31-48d0-88e1-f3251edbebc3" (UID: "6721337b-1c31-48d0-88e1-f3251edbebc3"). InnerVolumeSpecName "kube-api-access-fkjkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:16:27.588228 master-0 kubenswrapper[28120]: I0220 15:16:27.588147 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6721337b-1c31-48d0-88e1-f3251edbebc3-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:27.588513 master-0 kubenswrapper[28120]: I0220 15:16:27.588474 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b332210-786b-421d-8b99-5dcfbeb5196c-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:27.588703 master-0 kubenswrapper[28120]: I0220 15:16:27.588668 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkjkd\" (UniqueName: \"kubernetes.io/projected/6721337b-1c31-48d0-88e1-f3251edbebc3-kube-api-access-fkjkd\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:27.588959 master-0 kubenswrapper[28120]: I0220 15:16:27.588888 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrgm8\" (UniqueName: \"kubernetes.io/projected/2b332210-786b-421d-8b99-5dcfbeb5196c-kube-api-access-nrgm8\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:28.306302 master-0 kubenswrapper[28120]: I0220 15:16:28.306194 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2588-account-create-update-2jr7b"
Feb 20 15:16:28.306302 master-0 kubenswrapper[28120]: I0220 15:16:28.306275 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kcsfb"
Feb 20 15:16:28.307315 master-0 kubenswrapper[28120]: I0220 15:16:28.306305 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cdlc8"
Feb 20 15:16:28.307315 master-0 kubenswrapper[28120]: I0220 15:16:28.306194 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3973-account-create-update-hf6wb"
Feb 20 15:16:30.128990 master-0 kubenswrapper[28120]: I0220 15:16:30.127849 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8z72f"]
Feb 20 15:16:30.131369 master-0 kubenswrapper[28120]: E0220 15:16:30.131317 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bbf6e5-6c71-4b45-9aa6-cbfe06455f50" containerName="mariadb-database-create"
Feb 20 15:16:30.131369 master-0 kubenswrapper[28120]: I0220 15:16:30.131357 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bbf6e5-6c71-4b45-9aa6-cbfe06455f50" containerName="mariadb-database-create"
Feb 20 15:16:30.131504 master-0 kubenswrapper[28120]: E0220 15:16:30.131383 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" containerName="init"
Feb 20 15:16:30.131504 master-0 kubenswrapper[28120]: I0220 15:16:30.131393 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" containerName="init"
Feb 20 15:16:30.131504 master-0 kubenswrapper[28120]: E0220 15:16:30.131406 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812afa64-06d1-4cb1-95b4-93dda2cce8eb" containerName="mariadb-account-create-update"
Feb 20 15:16:30.131504 master-0 kubenswrapper[28120]: I0220 15:16:30.131416 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="812afa64-06d1-4cb1-95b4-93dda2cce8eb" containerName="mariadb-account-create-update"
Feb 20 15:16:30.131504 master-0 kubenswrapper[28120]: E0220 15:16:30.131437 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5684c90-d24c-4dff-b5aa-ae72282b2a6f" containerName="mariadb-account-create-update"
Feb 20 15:16:30.131504 master-0 kubenswrapper[28120]: I0220 15:16:30.131445 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5684c90-d24c-4dff-b5aa-ae72282b2a6f" containerName="mariadb-account-create-update"
Feb 20 15:16:30.131504 master-0 kubenswrapper[28120]: E0220 15:16:30.131461 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccdb626-016f-4380-9396-9597993bc3df" containerName="mariadb-account-create-update"
Feb 20 15:16:30.131504 master-0 kubenswrapper[28120]: I0220 15:16:30.131499 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccdb626-016f-4380-9396-9597993bc3df" containerName="mariadb-account-create-update"
Feb 20 15:16:30.131855 master-0 kubenswrapper[28120]: E0220 15:16:30.131535 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b332210-786b-421d-8b99-5dcfbeb5196c" containerName="mariadb-account-create-update"
Feb 20 15:16:30.131855 master-0 kubenswrapper[28120]: I0220 15:16:30.131548 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b332210-786b-421d-8b99-5dcfbeb5196c" containerName="mariadb-account-create-update"
Feb 20 15:16:30.131855 master-0 kubenswrapper[28120]: E0220 15:16:30.131585 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cb2c0b-9fbe-4047-b752-b96968eb5408" containerName="mariadb-database-create"
Feb 20 15:16:30.131855 master-0 kubenswrapper[28120]: I0220 15:16:30.131597 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cb2c0b-9fbe-4047-b752-b96968eb5408" containerName="mariadb-database-create"
Feb 20 15:16:30.131855 master-0 kubenswrapper[28120]: E0220 15:16:30.131616 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" containerName="dnsmasq-dns"
Feb 20 15:16:30.131855 master-0 kubenswrapper[28120]: I0220 15:16:30.131627 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" containerName="dnsmasq-dns"
Feb 20 15:16:30.131855 master-0 kubenswrapper[28120]: E0220 15:16:30.131667 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6721337b-1c31-48d0-88e1-f3251edbebc3" containerName="mariadb-database-create"
Feb 20 15:16:30.131855 master-0 kubenswrapper[28120]: I0220 15:16:30.131680 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6721337b-1c31-48d0-88e1-f3251edbebc3" containerName="mariadb-database-create"
Feb 20 15:16:30.132194 master-0 kubenswrapper[28120]: I0220 15:16:30.132126 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bbf6e5-6c71-4b45-9aa6-cbfe06455f50" containerName="mariadb-database-create"
Feb 20 15:16:30.132194 master-0 kubenswrapper[28120]: I0220 15:16:30.132174 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5684c90-d24c-4dff-b5aa-ae72282b2a6f" containerName="mariadb-account-create-update"
Feb 20 15:16:30.132270 master-0 kubenswrapper[28120]: I0220 15:16:30.132199 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="6721337b-1c31-48d0-88e1-f3251edbebc3" containerName="mariadb-database-create"
Feb 20 15:16:30.132270 master-0 kubenswrapper[28120]: I0220 15:16:30.132228 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="812afa64-06d1-4cb1-95b4-93dda2cce8eb" containerName="mariadb-account-create-update"
Feb 20 15:16:30.132270 master-0 kubenswrapper[28120]: I0220 15:16:30.132253 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ccdb626-016f-4380-9396-9597993bc3df" containerName="mariadb-account-create-update"
Feb 20 15:16:30.132270 master-0 kubenswrapper[28120]: I0220 15:16:30.132269 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="15cb2c0b-9fbe-4047-b752-b96968eb5408" containerName="mariadb-database-create"
Feb 20 15:16:30.132424 master-0 kubenswrapper[28120]: I0220 15:16:30.132300 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b332210-786b-421d-8b99-5dcfbeb5196c" containerName="mariadb-account-create-update"
Feb 20 15:16:30.132424 master-0 kubenswrapper[28120]: I0220 15:16:30.132315 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e6f35c-0dcb-4fc0-a236-7245736b3ae7" containerName="dnsmasq-dns"
Feb 20 15:16:30.133237 master-0 kubenswrapper[28120]: I0220 15:16:30.133208 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8z72f"
Feb 20 15:16:30.139184 master-0 kubenswrapper[28120]: I0220 15:16:30.139105 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 20 15:16:30.146874 master-0 kubenswrapper[28120]: I0220 15:16:30.146811 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8z72f"]
Feb 20 15:16:30.167217 master-0 kubenswrapper[28120]: I0220 15:16:30.167146 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b-operator-scripts\") pod \"root-account-create-update-8z72f\" (UID: \"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b\") " pod="openstack/root-account-create-update-8z72f"
Feb 20 15:16:30.167450 master-0 kubenswrapper[28120]: I0220 15:16:30.167333 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnwc\" (UniqueName: \"kubernetes.io/projected/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b-kube-api-access-mxnwc\") pod \"root-account-create-update-8z72f\" (UID: \"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b\") " pod="openstack/root-account-create-update-8z72f"
Feb 20 15:16:30.268463 master-0 kubenswrapper[28120]: I0220 15:16:30.268361 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b-operator-scripts\") pod \"root-account-create-update-8z72f\" (UID: \"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b\") " pod="openstack/root-account-create-update-8z72f"
Feb 20 15:16:30.268739 master-0 kubenswrapper[28120]: I0220 15:16:30.268644 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnwc\" (UniqueName: \"kubernetes.io/projected/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b-kube-api-access-mxnwc\") pod \"root-account-create-update-8z72f\" (UID: \"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b\") " pod="openstack/root-account-create-update-8z72f"
Feb 20 15:16:30.269092 master-0 kubenswrapper[28120]: I0220 15:16:30.269040 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b-operator-scripts\") pod \"root-account-create-update-8z72f\" (UID: \"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b\") " pod="openstack/root-account-create-update-8z72f"
Feb 20 15:16:30.288649 master-0 kubenswrapper[28120]: I0220 15:16:30.288565 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxnwc\" (UniqueName: \"kubernetes.io/projected/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b-kube-api-access-mxnwc\") pod \"root-account-create-update-8z72f\" (UID: \"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b\") " pod="openstack/root-account-create-update-8z72f"
Feb 20 15:16:30.337989 master-0 kubenswrapper[28120]: I0220 15:16:30.337885 28120 generic.go:334] "Generic (PLEG): container finished" podID="e9cef7d6-6daa-4419-8bfa-4591fab1d15e" containerID="d10b3ea93bd1033515717862125ddc4d1e58a40f2f81af9d93edfccdc389873c" exitCode=0
Feb 20 15:16:30.337989 master-0 kubenswrapper[28120]: I0220 15:16:30.337951 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-chd7d" event={"ID":"e9cef7d6-6daa-4419-8bfa-4591fab1d15e","Type":"ContainerDied","Data":"d10b3ea93bd1033515717862125ddc4d1e58a40f2f81af9d93edfccdc389873c"}
Feb 20 15:16:30.509457 master-0 kubenswrapper[28120]: I0220 15:16:30.509341 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8z72f"
Feb 20 15:16:31.059287 master-0 kubenswrapper[28120]: I0220 15:16:31.059210 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8z72f"]
Feb 20 15:16:31.060588 master-0 kubenswrapper[28120]: W0220 15:16:31.060527 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01ea4d45_37ae_4b09_a8a3_4bd372b5ac7b.slice/crio-e919669ac3b55651841d12885bfa8d5c8854ee19faf4d250b1d2174000f678aa WatchSource:0}: Error finding container e919669ac3b55651841d12885bfa8d5c8854ee19faf4d250b1d2174000f678aa: Status 404 returned error can't find the container with id e919669ac3b55651841d12885bfa8d5c8854ee19faf4d250b1d2174000f678aa
Feb 20 15:16:31.357180 master-0 kubenswrapper[28120]: I0220 15:16:31.355449 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8z72f" event={"ID":"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b","Type":"ContainerStarted","Data":"0f344b88adcf8828707d3a641cfe72ce319a5628a21a7995a90e103d07f9b5dd"}
Feb 20 15:16:31.357180 master-0 kubenswrapper[28120]: I0220 15:16:31.355569 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8z72f" event={"ID":"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b","Type":"ContainerStarted","Data":"e919669ac3b55651841d12885bfa8d5c8854ee19faf4d250b1d2174000f678aa"}
Feb 20 15:16:31.382823 master-0 kubenswrapper[28120]: I0220 15:16:31.382674 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-8z72f" podStartSLOduration=1.382626951 podStartE2EDuration="1.382626951s" podCreationTimestamp="2026-02-20 15:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:16:31.379422841 +0000 UTC m=+929.640216444" watchObservedRunningTime="2026-02-20 15:16:31.382626951 +0000 UTC m=+929.643420514"
Feb 20 15:16:31.598031 master-0 kubenswrapper[28120]: I0220 15:16:31.597897 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0"
Feb 20 15:16:31.602752 master-0 kubenswrapper[28120]: I0220 15:16:31.602707 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/742c46aa-374c-4e50-a4a1-6e46f7c13937-etc-swift\") pod \"swift-storage-0\" (UID: \"742c46aa-374c-4e50-a4a1-6e46f7c13937\") " pod="openstack/swift-storage-0"
Feb 20 15:16:31.624080 master-0 kubenswrapper[28120]: I0220 15:16:31.623781 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 20 15:16:31.826553 master-0 kubenswrapper[28120]: I0220 15:16:31.826500 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:32.005719 master-0 kubenswrapper[28120]: I0220 15:16:32.005677 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-dispersionconf\") pod \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") "
Feb 20 15:16:32.005942 master-0 kubenswrapper[28120]: I0220 15:16:32.005735 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-combined-ca-bundle\") pod \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") "
Feb 20 15:16:32.005942 master-0 kubenswrapper[28120]: I0220 15:16:32.005846 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwbqf\" (UniqueName: \"kubernetes.io/projected/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-kube-api-access-gwbqf\") pod \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") "
Feb 20 15:16:32.006072 master-0 kubenswrapper[28120]: I0220 15:16:32.006044 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-etc-swift\") pod \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") "
Feb 20 15:16:32.006132 master-0 kubenswrapper[28120]: I0220 15:16:32.006084 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-scripts\") pod \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") "
Feb 20 15:16:32.006194 master-0 kubenswrapper[28120]: I0220 15:16:32.006142 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-swiftconf\") pod \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") "
Feb 20 15:16:32.006240 master-0 kubenswrapper[28120]: I0220 15:16:32.006208 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-ring-data-devices\") pod \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\" (UID: \"e9cef7d6-6daa-4419-8bfa-4591fab1d15e\") "
Feb 20 15:16:32.007520 master-0 kubenswrapper[28120]: I0220 15:16:32.007457 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e9cef7d6-6daa-4419-8bfa-4591fab1d15e" (UID: "e9cef7d6-6daa-4419-8bfa-4591fab1d15e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:16:32.008311 master-0 kubenswrapper[28120]: I0220 15:16:32.008249 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e9cef7d6-6daa-4419-8bfa-4591fab1d15e" (UID: "e9cef7d6-6daa-4419-8bfa-4591fab1d15e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:16:32.009727 master-0 kubenswrapper[28120]: I0220 15:16:32.009656 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-kube-api-access-gwbqf" (OuterVolumeSpecName: "kube-api-access-gwbqf") pod "e9cef7d6-6daa-4419-8bfa-4591fab1d15e" (UID: "e9cef7d6-6daa-4419-8bfa-4591fab1d15e"). InnerVolumeSpecName "kube-api-access-gwbqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:16:32.016828 master-0 kubenswrapper[28120]: I0220 15:16:32.016750 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e9cef7d6-6daa-4419-8bfa-4591fab1d15e" (UID: "e9cef7d6-6daa-4419-8bfa-4591fab1d15e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:16:32.033842 master-0 kubenswrapper[28120]: I0220 15:16:32.033761 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9cef7d6-6daa-4419-8bfa-4591fab1d15e" (UID: "e9cef7d6-6daa-4419-8bfa-4591fab1d15e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:16:32.054449 master-0 kubenswrapper[28120]: I0220 15:16:32.054367 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-scripts" (OuterVolumeSpecName: "scripts") pod "e9cef7d6-6daa-4419-8bfa-4591fab1d15e" (UID: "e9cef7d6-6daa-4419-8bfa-4591fab1d15e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:16:32.067955 master-0 kubenswrapper[28120]: I0220 15:16:32.067101 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e9cef7d6-6daa-4419-8bfa-4591fab1d15e" (UID: "e9cef7d6-6daa-4419-8bfa-4591fab1d15e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:16:32.113987 master-0 kubenswrapper[28120]: I0220 15:16:32.110858 28120 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-etc-swift\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:32.113987 master-0 kubenswrapper[28120]: I0220 15:16:32.111410 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:32.113987 master-0 kubenswrapper[28120]: I0220 15:16:32.111434 28120 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-swiftconf\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:32.113987 master-0 kubenswrapper[28120]: I0220 15:16:32.111452 28120 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-ring-data-devices\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:32.113987 master-0 kubenswrapper[28120]: I0220 15:16:32.111468 28120 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-dispersionconf\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:32.113987 master-0 kubenswrapper[28120]: I0220 15:16:32.111485 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:32.113987 master-0 kubenswrapper[28120]: I0220 15:16:32.111501 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwbqf\" (UniqueName: \"kubernetes.io/projected/e9cef7d6-6daa-4419-8bfa-4591fab1d15e-kube-api-access-gwbqf\") on node \"master-0\" DevicePath \"\""
Feb 20 15:16:32.129596 master-0 kubenswrapper[28120]: I0220 15:16:32.129532 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 20 15:16:32.370350 master-0 kubenswrapper[28120]: I0220 15:16:32.370092 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"ac6835f9b6a8b7a13676b306755be884db486f93706aeb6dc990d58d8d4e8f21"}
Feb 20 15:16:32.374348 master-0 kubenswrapper[28120]: I0220 15:16:32.374271 28120 generic.go:334] "Generic (PLEG): container finished" podID="01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b" containerID="0f344b88adcf8828707d3a641cfe72ce319a5628a21a7995a90e103d07f9b5dd" exitCode=0
Feb 20 15:16:32.374542 master-0 kubenswrapper[28120]: I0220 15:16:32.374358 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8z72f" event={"ID":"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b","Type":"ContainerDied","Data":"0f344b88adcf8828707d3a641cfe72ce319a5628a21a7995a90e103d07f9b5dd"}
Feb 20 15:16:32.377429 master-0 kubenswrapper[28120]: I0220 15:16:32.377355 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-chd7d" event={"ID":"e9cef7d6-6daa-4419-8bfa-4591fab1d15e","Type":"ContainerDied","Data":"10dd15c106f2515f026ad477f562837c1b1cfacc310a9e8d71139cbb65f8f1b2"}
Feb 20 15:16:32.377429 master-0 kubenswrapper[28120]: I0220 15:16:32.377395 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-chd7d"
Feb 20 15:16:32.377715 master-0 kubenswrapper[28120]: I0220 15:16:32.377401 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10dd15c106f2515f026ad477f562837c1b1cfacc310a9e8d71139cbb65f8f1b2"
Feb 20 15:16:32.487594 master-0 kubenswrapper[28120]: I0220 15:16:32.487531 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ksw92"]
Feb 20 15:16:32.488127 master-0 kubenswrapper[28120]: E0220 15:16:32.488094 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9cef7d6-6daa-4419-8bfa-4591fab1d15e" containerName="swift-ring-rebalance"
Feb 20 15:16:32.488127 master-0 kubenswrapper[28120]: I0220 15:16:32.488118 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9cef7d6-6daa-4419-8bfa-4591fab1d15e" containerName="swift-ring-rebalance"
Feb 20 15:16:32.488516 master-0 kubenswrapper[28120]: I0220 15:16:32.488475 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9cef7d6-6daa-4419-8bfa-4591fab1d15e" containerName="swift-ring-rebalance"
Feb 20 15:16:32.489324 master-0 kubenswrapper[28120]: I0220 15:16:32.489281 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.493157 master-0 kubenswrapper[28120]: I0220 15:16:32.493115 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-c0df7-config-data"
Feb 20 15:16:32.502699 master-0 kubenswrapper[28120]: I0220 15:16:32.502642 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ksw92"]
Feb 20 15:16:32.622146 master-0 kubenswrapper[28120]: I0220 15:16:32.621932 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dstvx\" (UniqueName: \"kubernetes.io/projected/1f0398cf-ba38-435d-b5a6-44254a2cb187-kube-api-access-dstvx\") pod \"glance-db-sync-ksw92\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.622452 master-0 kubenswrapper[28120]: I0220 15:16:32.622430 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-combined-ca-bundle\") pod \"glance-db-sync-ksw92\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.622630 master-0 kubenswrapper[28120]: I0220 15:16:32.622610 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-config-data\") pod \"glance-db-sync-ksw92\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.622786 master-0 kubenswrapper[28120]: I0220 15:16:32.622768 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-db-sync-config-data\") pod \"glance-db-sync-ksw92\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.727357 master-0 kubenswrapper[28120]: I0220 15:16:32.727292 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-combined-ca-bundle\") pod \"glance-db-sync-ksw92\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.727563 master-0 kubenswrapper[28120]: I0220 15:16:32.727420 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-config-data\") pod \"glance-db-sync-ksw92\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.727563 master-0 kubenswrapper[28120]: I0220 15:16:32.727497 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-db-sync-config-data\") pod \"glance-db-sync-ksw92\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.727630 master-0 kubenswrapper[28120]: I0220 15:16:32.727603 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dstvx\" (UniqueName: \"kubernetes.io/projected/1f0398cf-ba38-435d-b5a6-44254a2cb187-kube-api-access-dstvx\") pod \"glance-db-sync-ksw92\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.741784 master-0 kubenswrapper[28120]: I0220 15:16:32.741731 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-combined-ca-bundle\") pod \"glance-db-sync-ksw92\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.751537 master-0 kubenswrapper[28120]: I0220 15:16:32.751494 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-db-sync-config-data\") pod \"glance-db-sync-ksw92\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.765974 master-0 kubenswrapper[28120]: I0220 15:16:32.765493 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-config-data\") pod \"glance-db-sync-ksw92\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.771803 master-0 kubenswrapper[28120]: I0220 15:16:32.771757 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dstvx\" (UniqueName: \"kubernetes.io/projected/1f0398cf-ba38-435d-b5a6-44254a2cb187-kube-api-access-dstvx\") pod \"glance-db-sync-ksw92\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " pod="openstack/glance-db-sync-ksw92"
Feb 20 15:16:32.812541 master-0 kubenswrapper[28120]: I0220 15:16:32.812121 28120 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-db-sync-ksw92" Feb 20 15:16:33.402283 master-0 kubenswrapper[28120]: I0220 15:16:33.396342 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ksw92"] Feb 20 15:16:33.410521 master-0 kubenswrapper[28120]: W0220 15:16:33.408305 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f0398cf_ba38_435d_b5a6_44254a2cb187.slice/crio-e14d5994a0a1619c6947dfad80a14e506c26663727d675ff244fb689217e963d WatchSource:0}: Error finding container e14d5994a0a1619c6947dfad80a14e506c26663727d675ff244fb689217e963d: Status 404 returned error can't find the container with id e14d5994a0a1619c6947dfad80a14e506c26663727d675ff244fb689217e963d Feb 20 15:16:33.826755 master-0 kubenswrapper[28120]: I0220 15:16:33.826681 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8z72f" Feb 20 15:16:33.953105 master-0 kubenswrapper[28120]: I0220 15:16:33.953026 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b-operator-scripts\") pod \"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b\" (UID: \"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b\") " Feb 20 15:16:33.953352 master-0 kubenswrapper[28120]: I0220 15:16:33.953195 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxnwc\" (UniqueName: \"kubernetes.io/projected/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b-kube-api-access-mxnwc\") pod \"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b\" (UID: \"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b\") " Feb 20 15:16:33.954134 master-0 kubenswrapper[28120]: I0220 15:16:33.954059 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b" (UID: "01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:33.958070 master-0 kubenswrapper[28120]: I0220 15:16:33.956996 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b-kube-api-access-mxnwc" (OuterVolumeSpecName: "kube-api-access-mxnwc") pod "01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b" (UID: "01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b"). InnerVolumeSpecName "kube-api-access-mxnwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:16:34.055427 master-0 kubenswrapper[28120]: I0220 15:16:34.055267 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxnwc\" (UniqueName: \"kubernetes.io/projected/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b-kube-api-access-mxnwc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:34.055661 master-0 kubenswrapper[28120]: I0220 15:16:34.055648 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:34.411199 master-0 kubenswrapper[28120]: I0220 15:16:34.411056 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ksw92" event={"ID":"1f0398cf-ba38-435d-b5a6-44254a2cb187","Type":"ContainerStarted","Data":"e14d5994a0a1619c6947dfad80a14e506c26663727d675ff244fb689217e963d"} Feb 20 15:16:34.416946 master-0 kubenswrapper[28120]: I0220 15:16:34.415949 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8z72f" event={"ID":"01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b","Type":"ContainerDied","Data":"e919669ac3b55651841d12885bfa8d5c8854ee19faf4d250b1d2174000f678aa"} Feb 20 15:16:34.416946 master-0 
kubenswrapper[28120]: I0220 15:16:34.416086 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e919669ac3b55651841d12885bfa8d5c8854ee19faf4d250b1d2174000f678aa" Feb 20 15:16:34.416946 master-0 kubenswrapper[28120]: I0220 15:16:34.416001 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8z72f" Feb 20 15:16:34.420730 master-0 kubenswrapper[28120]: I0220 15:16:34.420685 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"0b100e127255aec1897c3312d38572eea642bd256bfee2629ca929a9119ecfed"} Feb 20 15:16:34.420864 master-0 kubenswrapper[28120]: I0220 15:16:34.420739 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"c2d2fe01e626eb0f5a0bece22db61f5d96379e97b13ad1f2886d9f3474f9de3a"} Feb 20 15:16:34.420864 master-0 kubenswrapper[28120]: I0220 15:16:34.420754 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"dce11b073f1aaaec61d8b46ec3a84f3f24ab2f9a0bd9b5901a49c399f7365101"} Feb 20 15:16:34.420864 master-0 kubenswrapper[28120]: I0220 15:16:34.420769 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"33d052e26369590337f9a4543a1bdea28bf7db8c021d5c4c337900d55035fe94"} Feb 20 15:16:34.678103 master-0 kubenswrapper[28120]: I0220 15:16:34.677990 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 20 15:16:36.446361 master-0 kubenswrapper[28120]: I0220 15:16:36.446246 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"1e534fc29da914540fc46c933885600945feef13b81ec9cf4050488e3cb4982c"} Feb 20 15:16:36.446361 master-0 kubenswrapper[28120]: I0220 15:16:36.446317 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"4200fedba8c2c558260141f47adc637f4c777d3f84b04ebd2dec661508c5bbd0"} Feb 20 15:16:36.446361 master-0 kubenswrapper[28120]: I0220 15:16:36.446339 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"375cf58e0d551ef24c96ec43eb7c59b7aec005bd26f56007e739edc50a70b941"} Feb 20 15:16:36.446361 master-0 kubenswrapper[28120]: I0220 15:16:36.446357 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"30b1274969cde2a7fd9c19b28191e19308e9bbaa4a2f43a61e542f8d9a227e97"} Feb 20 15:16:36.689010 master-0 kubenswrapper[28120]: I0220 15:16:36.688853 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mdjwf" podUID="4a840047-42da-4d9c-81e2-8a4da0c3997f" containerName="ovn-controller" probeResult="failure" output=< Feb 20 15:16:36.689010 master-0 kubenswrapper[28120]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 20 15:16:36.689010 master-0 kubenswrapper[28120]: > Feb 20 15:16:36.771994 master-0 kubenswrapper[28120]: I0220 15:16:36.771884 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5lchw" Feb 20 15:16:37.461900 master-0 kubenswrapper[28120]: I0220 15:16:37.461819 28120 generic.go:334] "Generic (PLEG): container finished" podID="041ad820-183f-41ee-b690-b1687d55e12e" 
containerID="fa167e79bf213365d37983b936c0fa20c6cdd368d2f3e767dce20eed925344bb" exitCode=0 Feb 20 15:16:37.461900 master-0 kubenswrapper[28120]: I0220 15:16:37.461879 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"041ad820-183f-41ee-b690-b1687d55e12e","Type":"ContainerDied","Data":"fa167e79bf213365d37983b936c0fa20c6cdd368d2f3e767dce20eed925344bb"} Feb 20 15:16:38.477763 master-0 kubenswrapper[28120]: I0220 15:16:38.477683 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"041ad820-183f-41ee-b690-b1687d55e12e","Type":"ContainerStarted","Data":"aa5ca879e93a21ed2e633036f42c8335529132caed09f85497e16853c6f7a260"} Feb 20 15:16:38.478761 master-0 kubenswrapper[28120]: I0220 15:16:38.478404 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 20 15:16:38.487684 master-0 kubenswrapper[28120]: I0220 15:16:38.487609 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"8638911e1c8454d3e51273ba3c0e93f15e1a43f5562018faefa6a37015f32070"} Feb 20 15:16:38.487684 master-0 kubenswrapper[28120]: I0220 15:16:38.487652 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"3fc37f6416ef325f7bce9e258b2a4ec92dd9d8199c87819f43c460e6038d2e16"} Feb 20 15:16:38.487684 master-0 kubenswrapper[28120]: I0220 15:16:38.487663 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"6153251d986fab1e26a3740bf8b020d9ec370f6f93672a26e70f60f0592710d7"} Feb 20 15:16:38.487684 master-0 kubenswrapper[28120]: I0220 15:16:38.487672 28120 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"b90ca20926c8ab797eb8e2e5a7dd8ebcc8a931dd1181cfd69df3f7b6198b1b59"} Feb 20 15:16:38.491582 master-0 kubenswrapper[28120]: I0220 15:16:38.491533 28120 generic.go:334] "Generic (PLEG): container finished" podID="86fa1b6a-d104-4787-b6d5-21dfd3a324f8" containerID="8adaee0af42d8c571d79f832d25c27da679ac0d768ebd2e1584a2e78f559823f" exitCode=0 Feb 20 15:16:38.491582 master-0 kubenswrapper[28120]: I0220 15:16:38.491568 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86fa1b6a-d104-4787-b6d5-21dfd3a324f8","Type":"ContainerDied","Data":"8adaee0af42d8c571d79f832d25c27da679ac0d768ebd2e1584a2e78f559823f"} Feb 20 15:16:38.533341 master-0 kubenswrapper[28120]: I0220 15:16:38.533248 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.519134668 podStartE2EDuration="1m1.533226398s" podCreationTimestamp="2026-02-20 15:15:37 +0000 UTC" firstStartedPulling="2026-02-20 15:15:54.896351925 +0000 UTC m=+893.157145488" lastFinishedPulling="2026-02-20 15:16:02.910443635 +0000 UTC m=+901.171237218" observedRunningTime="2026-02-20 15:16:38.512771558 +0000 UTC m=+936.773565121" watchObservedRunningTime="2026-02-20 15:16:38.533226398 +0000 UTC m=+936.794019961" Feb 20 15:16:39.516111 master-0 kubenswrapper[28120]: I0220 15:16:39.516046 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"04d7bdf3d6d5dc0d1a285a12df0b8cf6c9f7b046a8d401833c23cdb775b6d59d"} Feb 20 15:16:39.516111 master-0 kubenswrapper[28120]: I0220 15:16:39.516111 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"853b36d87f895218905656ade2e9f442c2615cb2820519adfa048ce108ba10cd"} Feb 20 15:16:39.516111 master-0 kubenswrapper[28120]: I0220 15:16:39.516124 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"742c46aa-374c-4e50-a4a1-6e46f7c13937","Type":"ContainerStarted","Data":"03c115cb8cb67a5a0378d84d80cae9a731812ae6b26347e7ee8141d6ee42bb80"} Feb 20 15:16:39.520175 master-0 kubenswrapper[28120]: I0220 15:16:39.520147 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86fa1b6a-d104-4787-b6d5-21dfd3a324f8","Type":"ContainerStarted","Data":"9b904dd1d666a259131da2c4900a6af3066bcc94cea96a37adebf6410e0c13d7"} Feb 20 15:16:39.520277 master-0 kubenswrapper[28120]: I0220 15:16:39.520261 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 20 15:16:39.584120 master-0 kubenswrapper[28120]: I0220 15:16:39.581958 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.388058017 podStartE2EDuration="26.581859274s" podCreationTimestamp="2026-02-20 15:16:13 +0000 UTC" firstStartedPulling="2026-02-20 15:16:32.131330821 +0000 UTC m=+930.392124394" lastFinishedPulling="2026-02-20 15:16:37.325132068 +0000 UTC m=+935.585925651" observedRunningTime="2026-02-20 15:16:39.567154528 +0000 UTC m=+937.827948101" watchObservedRunningTime="2026-02-20 15:16:39.581859274 +0000 UTC m=+937.842652877" Feb 20 15:16:39.641768 master-0 kubenswrapper[28120]: I0220 15:16:39.641672 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=56.07316181 podStartE2EDuration="1m3.641646004s" podCreationTimestamp="2026-02-20 15:15:36 +0000 UTC" firstStartedPulling="2026-02-20 15:15:55.30701502 +0000 UTC m=+893.567808583" 
lastFinishedPulling="2026-02-20 15:16:02.875499194 +0000 UTC m=+901.136292777" observedRunningTime="2026-02-20 15:16:39.631358378 +0000 UTC m=+937.892151981" watchObservedRunningTime="2026-02-20 15:16:39.641646004 +0000 UTC m=+937.902439587" Feb 20 15:16:39.900983 master-0 kubenswrapper[28120]: I0220 15:16:39.900773 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67dc4d787c-4fhxl"] Feb 20 15:16:39.901510 master-0 kubenswrapper[28120]: E0220 15:16:39.901439 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b" containerName="mariadb-account-create-update" Feb 20 15:16:39.901510 master-0 kubenswrapper[28120]: I0220 15:16:39.901468 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b" containerName="mariadb-account-create-update" Feb 20 15:16:39.903451 master-0 kubenswrapper[28120]: I0220 15:16:39.902149 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b" containerName="mariadb-account-create-update" Feb 20 15:16:39.929029 master-0 kubenswrapper[28120]: I0220 15:16:39.928691 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67dc4d787c-4fhxl"] Feb 20 15:16:39.929029 master-0 kubenswrapper[28120]: I0220 15:16:39.928800 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:39.933706 master-0 kubenswrapper[28120]: I0220 15:16:39.933661 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 20 15:16:39.995947 master-0 kubenswrapper[28120]: I0220 15:16:39.995574 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-ovsdbserver-sb\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:39.995947 master-0 kubenswrapper[28120]: I0220 15:16:39.995655 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-dns-svc\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:39.995947 master-0 kubenswrapper[28120]: I0220 15:16:39.995747 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t72vq\" (UniqueName: \"kubernetes.io/projected/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-kube-api-access-t72vq\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:39.995947 master-0 kubenswrapper[28120]: I0220 15:16:39.995883 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-ovsdbserver-nb\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:39.996279 master-0 kubenswrapper[28120]: I0220 
15:16:39.995963 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-dns-swift-storage-0\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:39.996279 master-0 kubenswrapper[28120]: I0220 15:16:39.996197 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-config\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:40.099872 master-0 kubenswrapper[28120]: I0220 15:16:40.099573 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-dns-swift-storage-0\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:40.100182 master-0 kubenswrapper[28120]: I0220 15:16:40.100040 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-config\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:40.100497 master-0 kubenswrapper[28120]: I0220 15:16:40.100466 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-ovsdbserver-sb\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 
15:16:40.100829 master-0 kubenswrapper[28120]: I0220 15:16:40.100778 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-dns-svc\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:40.100971 master-0 kubenswrapper[28120]: I0220 15:16:40.100956 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t72vq\" (UniqueName: \"kubernetes.io/projected/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-kube-api-access-t72vq\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:40.101106 master-0 kubenswrapper[28120]: I0220 15:16:40.101092 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-ovsdbserver-nb\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:40.101947 master-0 kubenswrapper[28120]: I0220 15:16:40.101878 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-dns-svc\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:40.102612 master-0 kubenswrapper[28120]: I0220 15:16:40.102561 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-config\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:40.103781 master-0 
kubenswrapper[28120]: I0220 15:16:40.103750 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-ovsdbserver-nb\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:40.103837 master-0 kubenswrapper[28120]: I0220 15:16:40.103775 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-ovsdbserver-sb\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:40.106004 master-0 kubenswrapper[28120]: I0220 15:16:40.105976 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-dns-swift-storage-0\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:40.120657 master-0 kubenswrapper[28120]: I0220 15:16:40.120342 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t72vq\" (UniqueName: \"kubernetes.io/projected/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-kube-api-access-t72vq\") pod \"dnsmasq-dns-67dc4d787c-4fhxl\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:40.253839 master-0 kubenswrapper[28120]: I0220 15:16:40.253768 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:41.682145 master-0 kubenswrapper[28120]: I0220 15:16:41.682082 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mdjwf" podUID="4a840047-42da-4d9c-81e2-8a4da0c3997f" containerName="ovn-controller" probeResult="failure" output=< Feb 20 15:16:41.682145 master-0 kubenswrapper[28120]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 20 15:16:41.682145 master-0 kubenswrapper[28120]: > Feb 20 15:16:41.772606 master-0 kubenswrapper[28120]: I0220 15:16:41.772545 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5lchw" Feb 20 15:16:42.101418 master-0 kubenswrapper[28120]: I0220 15:16:42.101358 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mdjwf-config-d2pjm"] Feb 20 15:16:42.103768 master-0 kubenswrapper[28120]: I0220 15:16:42.103729 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.107993 master-0 kubenswrapper[28120]: I0220 15:16:42.107785 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 20 15:16:42.144183 master-0 kubenswrapper[28120]: I0220 15:16:42.144053 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjwsd\" (UniqueName: \"kubernetes.io/projected/7dbff544-863a-4c2a-91c4-95869aee72d6-kube-api-access-rjwsd\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.144274 master-0 kubenswrapper[28120]: I0220 15:16:42.144191 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7dbff544-863a-4c2a-91c4-95869aee72d6-additional-scripts\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.144274 master-0 kubenswrapper[28120]: I0220 15:16:42.144256 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-run\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.144464 master-0 kubenswrapper[28120]: I0220 15:16:42.144438 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dbff544-863a-4c2a-91c4-95869aee72d6-scripts\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 
15:16:42.144526 master-0 kubenswrapper[28120]: I0220 15:16:42.144507 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-log-ovn\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.144646 master-0 kubenswrapper[28120]: I0220 15:16:42.144629 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-run-ovn\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.158247 master-0 kubenswrapper[28120]: I0220 15:16:42.158180 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mdjwf-config-d2pjm"] Feb 20 15:16:42.249122 master-0 kubenswrapper[28120]: I0220 15:16:42.249038 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-log-ovn\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.249284 master-0 kubenswrapper[28120]: I0220 15:16:42.249143 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-run-ovn\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.249284 master-0 kubenswrapper[28120]: I0220 15:16:42.249205 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rjwsd\" (UniqueName: \"kubernetes.io/projected/7dbff544-863a-4c2a-91c4-95869aee72d6-kube-api-access-rjwsd\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.249284 master-0 kubenswrapper[28120]: I0220 15:16:42.249248 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7dbff544-863a-4c2a-91c4-95869aee72d6-additional-scripts\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.249425 master-0 kubenswrapper[28120]: I0220 15:16:42.249320 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-run\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.249425 master-0 kubenswrapper[28120]: I0220 15:16:42.249392 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dbff544-863a-4c2a-91c4-95869aee72d6-scripts\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.252011 master-0 kubenswrapper[28120]: I0220 15:16:42.251986 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dbff544-863a-4c2a-91c4-95869aee72d6-scripts\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.252102 master-0 kubenswrapper[28120]: I0220 15:16:42.252074 28120 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-log-ovn\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.252145 master-0 kubenswrapper[28120]: I0220 15:16:42.252118 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-run-ovn\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.252378 master-0 kubenswrapper[28120]: I0220 15:16:42.252337 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-run\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.252950 master-0 kubenswrapper[28120]: I0220 15:16:42.252894 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7dbff544-863a-4c2a-91c4-95869aee72d6-additional-scripts\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.268959 master-0 kubenswrapper[28120]: I0220 15:16:42.268877 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjwsd\" (UniqueName: \"kubernetes.io/projected/7dbff544-863a-4c2a-91c4-95869aee72d6-kube-api-access-rjwsd\") pod \"ovn-controller-mdjwf-config-d2pjm\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:42.434487 master-0 kubenswrapper[28120]: I0220 15:16:42.434349 28120 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:46.695469 master-0 kubenswrapper[28120]: I0220 15:16:46.695393 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mdjwf" podUID="4a840047-42da-4d9c-81e2-8a4da0c3997f" containerName="ovn-controller" probeResult="failure" output=< Feb 20 15:16:46.695469 master-0 kubenswrapper[28120]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 20 15:16:46.695469 master-0 kubenswrapper[28120]: > Feb 20 15:16:46.756064 master-0 kubenswrapper[28120]: I0220 15:16:46.753903 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67dc4d787c-4fhxl"] Feb 20 15:16:46.768578 master-0 kubenswrapper[28120]: W0220 15:16:46.768069 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1bcbfd6_13f6_4fa6_9afc_f5fd68e4a3e8.slice/crio-9c0380b1b47e2ae4b0ea539d6643e8046df73477bcb87a26671b1f505779ba8e WatchSource:0}: Error finding container 9c0380b1b47e2ae4b0ea539d6643e8046df73477bcb87a26671b1f505779ba8e: Status 404 returned error can't find the container with id 9c0380b1b47e2ae4b0ea539d6643e8046df73477bcb87a26671b1f505779ba8e Feb 20 15:16:46.874354 master-0 kubenswrapper[28120]: W0220 15:16:46.874298 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dbff544_863a_4c2a_91c4_95869aee72d6.slice/crio-d997a728472711521b5620323114d94bb79a948ea5229f839bdaa4092bd991d4 WatchSource:0}: Error finding container d997a728472711521b5620323114d94bb79a948ea5229f839bdaa4092bd991d4: Status 404 returned error can't find the container with id d997a728472711521b5620323114d94bb79a948ea5229f839bdaa4092bd991d4 Feb 20 15:16:46.880410 master-0 kubenswrapper[28120]: I0220 15:16:46.880335 28120 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ovn-controller-mdjwf-config-d2pjm"] Feb 20 15:16:47.630483 master-0 kubenswrapper[28120]: I0220 15:16:47.630317 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mdjwf-config-d2pjm" event={"ID":"7dbff544-863a-4c2a-91c4-95869aee72d6","Type":"ContainerStarted","Data":"9d3ab156505d0a147cafbd9f58f42ff51264ee66c0eb34b8112f4b16e3bfea31"} Feb 20 15:16:47.630483 master-0 kubenswrapper[28120]: I0220 15:16:47.630403 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mdjwf-config-d2pjm" event={"ID":"7dbff544-863a-4c2a-91c4-95869aee72d6","Type":"ContainerStarted","Data":"d997a728472711521b5620323114d94bb79a948ea5229f839bdaa4092bd991d4"} Feb 20 15:16:47.634035 master-0 kubenswrapper[28120]: I0220 15:16:47.633986 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ksw92" event={"ID":"1f0398cf-ba38-435d-b5a6-44254a2cb187","Type":"ContainerStarted","Data":"4d2fcadee999e60e0c45f3652858d10d9d957882fbd8f5a778a0efd228d28713"} Feb 20 15:16:47.635937 master-0 kubenswrapper[28120]: I0220 15:16:47.635859 28120 generic.go:334] "Generic (PLEG): container finished" podID="f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" containerID="f47025edb1499fcf9ab38b0aea9abbad52d8f524d6827879d741702f3793da79" exitCode=0 Feb 20 15:16:47.635937 master-0 kubenswrapper[28120]: I0220 15:16:47.635892 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" event={"ID":"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8","Type":"ContainerDied","Data":"f47025edb1499fcf9ab38b0aea9abbad52d8f524d6827879d741702f3793da79"} Feb 20 15:16:47.635937 master-0 kubenswrapper[28120]: I0220 15:16:47.635908 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" event={"ID":"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8","Type":"ContainerStarted","Data":"9c0380b1b47e2ae4b0ea539d6643e8046df73477bcb87a26671b1f505779ba8e"} Feb 20 
15:16:47.656592 master-0 kubenswrapper[28120]: I0220 15:16:47.656488 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mdjwf-config-d2pjm" podStartSLOduration=5.656468412 podStartE2EDuration="5.656468412s" podCreationTimestamp="2026-02-20 15:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:16:47.654233806 +0000 UTC m=+945.915027389" watchObservedRunningTime="2026-02-20 15:16:47.656468412 +0000 UTC m=+945.917261985" Feb 20 15:16:47.684047 master-0 kubenswrapper[28120]: I0220 15:16:47.683951 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ksw92" podStartSLOduration=2.838958173 podStartE2EDuration="15.683933096s" podCreationTimestamp="2026-02-20 15:16:32 +0000 UTC" firstStartedPulling="2026-02-20 15:16:33.41203423 +0000 UTC m=+931.672827793" lastFinishedPulling="2026-02-20 15:16:46.257009153 +0000 UTC m=+944.517802716" observedRunningTime="2026-02-20 15:16:47.679382353 +0000 UTC m=+945.940175936" watchObservedRunningTime="2026-02-20 15:16:47.683933096 +0000 UTC m=+945.944726659" Feb 20 15:16:48.653710 master-0 kubenswrapper[28120]: I0220 15:16:48.653612 28120 generic.go:334] "Generic (PLEG): container finished" podID="7dbff544-863a-4c2a-91c4-95869aee72d6" containerID="9d3ab156505d0a147cafbd9f58f42ff51264ee66c0eb34b8112f4b16e3bfea31" exitCode=0 Feb 20 15:16:48.654548 master-0 kubenswrapper[28120]: I0220 15:16:48.653741 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mdjwf-config-d2pjm" event={"ID":"7dbff544-863a-4c2a-91c4-95869aee72d6","Type":"ContainerDied","Data":"9d3ab156505d0a147cafbd9f58f42ff51264ee66c0eb34b8112f4b16e3bfea31"} Feb 20 15:16:48.657903 master-0 kubenswrapper[28120]: I0220 15:16:48.657834 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" 
event={"ID":"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8","Type":"ContainerStarted","Data":"01800cbdf70942edc132b8b6021821c204891c17ac24664d21a8b6d40ef9eb4c"} Feb 20 15:16:48.741763 master-0 kubenswrapper[28120]: I0220 15:16:48.739530 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" podStartSLOduration=9.739499635 podStartE2EDuration="9.739499635s" podCreationTimestamp="2026-02-20 15:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:16:48.726398868 +0000 UTC m=+946.987192531" watchObservedRunningTime="2026-02-20 15:16:48.739499635 +0000 UTC m=+947.000293238" Feb 20 15:16:49.673366 master-0 kubenswrapper[28120]: I0220 15:16:49.671798 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:50.225171 master-0 kubenswrapper[28120]: I0220 15:16:50.225086 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:50.360101 master-0 kubenswrapper[28120]: I0220 15:16:50.359904 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-run-ovn\") pod \"7dbff544-863a-4c2a-91c4-95869aee72d6\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " Feb 20 15:16:50.360101 master-0 kubenswrapper[28120]: I0220 15:16:50.360063 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjwsd\" (UniqueName: \"kubernetes.io/projected/7dbff544-863a-4c2a-91c4-95869aee72d6-kube-api-access-rjwsd\") pod \"7dbff544-863a-4c2a-91c4-95869aee72d6\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " Feb 20 15:16:50.360473 master-0 kubenswrapper[28120]: I0220 15:16:50.360113 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7dbff544-863a-4c2a-91c4-95869aee72d6" (UID: "7dbff544-863a-4c2a-91c4-95869aee72d6"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:16:50.360473 master-0 kubenswrapper[28120]: I0220 15:16:50.360142 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dbff544-863a-4c2a-91c4-95869aee72d6-scripts\") pod \"7dbff544-863a-4c2a-91c4-95869aee72d6\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " Feb 20 15:16:50.360473 master-0 kubenswrapper[28120]: I0220 15:16:50.360233 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7dbff544-863a-4c2a-91c4-95869aee72d6-additional-scripts\") pod \"7dbff544-863a-4c2a-91c4-95869aee72d6\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " Feb 20 15:16:50.360473 master-0 kubenswrapper[28120]: I0220 15:16:50.360296 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-log-ovn\") pod \"7dbff544-863a-4c2a-91c4-95869aee72d6\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " Feb 20 15:16:50.360473 master-0 kubenswrapper[28120]: I0220 15:16:50.360337 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-run\") pod \"7dbff544-863a-4c2a-91c4-95869aee72d6\" (UID: \"7dbff544-863a-4c2a-91c4-95869aee72d6\") " Feb 20 15:16:50.361020 master-0 kubenswrapper[28120]: I0220 15:16:50.360499 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7dbff544-863a-4c2a-91c4-95869aee72d6" (UID: "7dbff544-863a-4c2a-91c4-95869aee72d6"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:16:50.361020 master-0 kubenswrapper[28120]: I0220 15:16:50.360638 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-run" (OuterVolumeSpecName: "var-run") pod "7dbff544-863a-4c2a-91c4-95869aee72d6" (UID: "7dbff544-863a-4c2a-91c4-95869aee72d6"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:16:50.361020 master-0 kubenswrapper[28120]: I0220 15:16:50.360725 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbff544-863a-4c2a-91c4-95869aee72d6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7dbff544-863a-4c2a-91c4-95869aee72d6" (UID: "7dbff544-863a-4c2a-91c4-95869aee72d6"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:50.361260 master-0 kubenswrapper[28120]: I0220 15:16:50.361126 28120 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:50.361260 master-0 kubenswrapper[28120]: I0220 15:16:50.361155 28120 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7dbff544-863a-4c2a-91c4-95869aee72d6-additional-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:50.361260 master-0 kubenswrapper[28120]: I0220 15:16:50.361175 28120 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:50.361260 master-0 kubenswrapper[28120]: I0220 15:16:50.361193 28120 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/7dbff544-863a-4c2a-91c4-95869aee72d6-var-run\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:50.361606 master-0 kubenswrapper[28120]: I0220 15:16:50.361261 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbff544-863a-4c2a-91c4-95869aee72d6-scripts" (OuterVolumeSpecName: "scripts") pod "7dbff544-863a-4c2a-91c4-95869aee72d6" (UID: "7dbff544-863a-4c2a-91c4-95869aee72d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:50.365488 master-0 kubenswrapper[28120]: I0220 15:16:50.365427 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbff544-863a-4c2a-91c4-95869aee72d6-kube-api-access-rjwsd" (OuterVolumeSpecName: "kube-api-access-rjwsd") pod "7dbff544-863a-4c2a-91c4-95869aee72d6" (UID: "7dbff544-863a-4c2a-91c4-95869aee72d6"). InnerVolumeSpecName "kube-api-access-rjwsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:16:50.464049 master-0 kubenswrapper[28120]: I0220 15:16:50.463908 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dbff544-863a-4c2a-91c4-95869aee72d6-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:50.464049 master-0 kubenswrapper[28120]: I0220 15:16:50.464008 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjwsd\" (UniqueName: \"kubernetes.io/projected/7dbff544-863a-4c2a-91c4-95869aee72d6-kube-api-access-rjwsd\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:50.690305 master-0 kubenswrapper[28120]: I0220 15:16:50.690151 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mdjwf-config-d2pjm" Feb 20 15:16:50.690305 master-0 kubenswrapper[28120]: I0220 15:16:50.690236 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mdjwf-config-d2pjm" event={"ID":"7dbff544-863a-4c2a-91c4-95869aee72d6","Type":"ContainerDied","Data":"d997a728472711521b5620323114d94bb79a948ea5229f839bdaa4092bd991d4"} Feb 20 15:16:50.690305 master-0 kubenswrapper[28120]: I0220 15:16:50.690276 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d997a728472711521b5620323114d94bb79a948ea5229f839bdaa4092bd991d4" Feb 20 15:16:51.399307 master-0 kubenswrapper[28120]: I0220 15:16:51.399223 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mdjwf-config-d2pjm"] Feb 20 15:16:51.416438 master-0 kubenswrapper[28120]: I0220 15:16:51.416330 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mdjwf-config-d2pjm"] Feb 20 15:16:51.674819 master-0 kubenswrapper[28120]: I0220 15:16:51.674705 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mdjwf" Feb 20 15:16:52.070445 master-0 kubenswrapper[28120]: I0220 15:16:52.070379 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dbff544-863a-4c2a-91c4-95869aee72d6" path="/var/lib/kubelet/pods/7dbff544-863a-4c2a-91c4-95869aee72d6/volumes" Feb 20 15:16:53.168068 master-0 kubenswrapper[28120]: I0220 15:16:53.143893 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 20 15:16:53.513217 master-0 kubenswrapper[28120]: I0220 15:16:53.512302 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-vgtxx"] Feb 20 15:16:53.513217 master-0 kubenswrapper[28120]: E0220 15:16:53.512875 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbff544-863a-4c2a-91c4-95869aee72d6" 
containerName="ovn-config" Feb 20 15:16:53.513217 master-0 kubenswrapper[28120]: I0220 15:16:53.512894 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbff544-863a-4c2a-91c4-95869aee72d6" containerName="ovn-config" Feb 20 15:16:53.513486 master-0 kubenswrapper[28120]: I0220 15:16:53.513237 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbff544-863a-4c2a-91c4-95869aee72d6" containerName="ovn-config" Feb 20 15:16:53.515003 master-0 kubenswrapper[28120]: I0220 15:16:53.514080 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vgtxx" Feb 20 15:16:53.542966 master-0 kubenswrapper[28120]: I0220 15:16:53.541369 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vgtxx"] Feb 20 15:16:53.569420 master-0 kubenswrapper[28120]: I0220 15:16:53.569207 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x57sx\" (UniqueName: \"kubernetes.io/projected/1f8f0488-62d7-4225-adc3-b536806b56c7-kube-api-access-x57sx\") pod \"cinder-db-create-vgtxx\" (UID: \"1f8f0488-62d7-4225-adc3-b536806b56c7\") " pod="openstack/cinder-db-create-vgtxx" Feb 20 15:16:53.569420 master-0 kubenswrapper[28120]: I0220 15:16:53.569271 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f8f0488-62d7-4225-adc3-b536806b56c7-operator-scripts\") pod \"cinder-db-create-vgtxx\" (UID: \"1f8f0488-62d7-4225-adc3-b536806b56c7\") " pod="openstack/cinder-db-create-vgtxx" Feb 20 15:16:53.614483 master-0 kubenswrapper[28120]: I0220 15:16:53.614225 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2c3c-account-create-update-fvlnz"] Feb 20 15:16:53.616733 master-0 kubenswrapper[28120]: I0220 15:16:53.616703 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2c3c-account-create-update-fvlnz" Feb 20 15:16:53.621325 master-0 kubenswrapper[28120]: I0220 15:16:53.621274 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 20 15:16:53.622768 master-0 kubenswrapper[28120]: I0220 15:16:53.622503 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2c3c-account-create-update-fvlnz"] Feb 20 15:16:53.674247 master-0 kubenswrapper[28120]: I0220 15:16:53.674154 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hhl8\" (UniqueName: \"kubernetes.io/projected/119ab416-9782-4f7d-a843-15903771feb0-kube-api-access-4hhl8\") pod \"cinder-2c3c-account-create-update-fvlnz\" (UID: \"119ab416-9782-4f7d-a843-15903771feb0\") " pod="openstack/cinder-2c3c-account-create-update-fvlnz" Feb 20 15:16:53.674472 master-0 kubenswrapper[28120]: I0220 15:16:53.674367 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119ab416-9782-4f7d-a843-15903771feb0-operator-scripts\") pod \"cinder-2c3c-account-create-update-fvlnz\" (UID: \"119ab416-9782-4f7d-a843-15903771feb0\") " pod="openstack/cinder-2c3c-account-create-update-fvlnz" Feb 20 15:16:53.674472 master-0 kubenswrapper[28120]: I0220 15:16:53.674401 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x57sx\" (UniqueName: \"kubernetes.io/projected/1f8f0488-62d7-4225-adc3-b536806b56c7-kube-api-access-x57sx\") pod \"cinder-db-create-vgtxx\" (UID: \"1f8f0488-62d7-4225-adc3-b536806b56c7\") " pod="openstack/cinder-db-create-vgtxx" Feb 20 15:16:53.674549 master-0 kubenswrapper[28120]: I0220 15:16:53.674462 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1f8f0488-62d7-4225-adc3-b536806b56c7-operator-scripts\") pod \"cinder-db-create-vgtxx\" (UID: \"1f8f0488-62d7-4225-adc3-b536806b56c7\") " pod="openstack/cinder-db-create-vgtxx" Feb 20 15:16:53.675654 master-0 kubenswrapper[28120]: I0220 15:16:53.675165 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f8f0488-62d7-4225-adc3-b536806b56c7-operator-scripts\") pod \"cinder-db-create-vgtxx\" (UID: \"1f8f0488-62d7-4225-adc3-b536806b56c7\") " pod="openstack/cinder-db-create-vgtxx" Feb 20 15:16:53.701541 master-0 kubenswrapper[28120]: I0220 15:16:53.701460 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x57sx\" (UniqueName: \"kubernetes.io/projected/1f8f0488-62d7-4225-adc3-b536806b56c7-kube-api-access-x57sx\") pod \"cinder-db-create-vgtxx\" (UID: \"1f8f0488-62d7-4225-adc3-b536806b56c7\") " pod="openstack/cinder-db-create-vgtxx" Feb 20 15:16:53.778296 master-0 kubenswrapper[28120]: I0220 15:16:53.778246 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119ab416-9782-4f7d-a843-15903771feb0-operator-scripts\") pod \"cinder-2c3c-account-create-update-fvlnz\" (UID: \"119ab416-9782-4f7d-a843-15903771feb0\") " pod="openstack/cinder-2c3c-account-create-update-fvlnz" Feb 20 15:16:53.780645 master-0 kubenswrapper[28120]: I0220 15:16:53.778888 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hhl8\" (UniqueName: \"kubernetes.io/projected/119ab416-9782-4f7d-a843-15903771feb0-kube-api-access-4hhl8\") pod \"cinder-2c3c-account-create-update-fvlnz\" (UID: \"119ab416-9782-4f7d-a843-15903771feb0\") " pod="openstack/cinder-2c3c-account-create-update-fvlnz" Feb 20 15:16:53.780645 master-0 kubenswrapper[28120]: I0220 15:16:53.779448 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119ab416-9782-4f7d-a843-15903771feb0-operator-scripts\") pod \"cinder-2c3c-account-create-update-fvlnz\" (UID: \"119ab416-9782-4f7d-a843-15903771feb0\") " pod="openstack/cinder-2c3c-account-create-update-fvlnz" Feb 20 15:16:53.816538 master-0 kubenswrapper[28120]: I0220 15:16:53.816473 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lpgjh"] Feb 20 15:16:53.816716 master-0 kubenswrapper[28120]: I0220 15:16:53.816492 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hhl8\" (UniqueName: \"kubernetes.io/projected/119ab416-9782-4f7d-a843-15903771feb0-kube-api-access-4hhl8\") pod \"cinder-2c3c-account-create-update-fvlnz\" (UID: \"119ab416-9782-4f7d-a843-15903771feb0\") " pod="openstack/cinder-2c3c-account-create-update-fvlnz" Feb 20 15:16:53.842263 master-0 kubenswrapper[28120]: I0220 15:16:53.842205 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lpgjh"] Feb 20 15:16:53.842444 master-0 kubenswrapper[28120]: I0220 15:16:53.842329 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lpgjh" Feb 20 15:16:53.859754 master-0 kubenswrapper[28120]: I0220 15:16:53.859694 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vgtxx" Feb 20 15:16:53.913952 master-0 kubenswrapper[28120]: I0220 15:16:53.911415 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-nmklt"] Feb 20 15:16:53.914201 master-0 kubenswrapper[28120]: I0220 15:16:53.914164 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nmklt" Feb 20 15:16:53.917519 master-0 kubenswrapper[28120]: I0220 15:16:53.917464 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 20 15:16:53.917979 master-0 kubenswrapper[28120]: I0220 15:16:53.917649 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 20 15:16:53.918133 master-0 kubenswrapper[28120]: I0220 15:16:53.917701 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 20 15:16:53.930547 master-0 kubenswrapper[28120]: I0220 15:16:53.930494 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nmklt"] Feb 20 15:16:53.939068 master-0 kubenswrapper[28120]: I0220 15:16:53.938381 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2c3c-account-create-update-fvlnz" Feb 20 15:16:53.946810 master-0 kubenswrapper[28120]: I0220 15:16:53.946762 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ff8a-account-create-update-szf6b"] Feb 20 15:16:53.948767 master-0 kubenswrapper[28120]: I0220 15:16:53.948106 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ff8a-account-create-update-szf6b" Feb 20 15:16:53.951904 master-0 kubenswrapper[28120]: I0220 15:16:53.950618 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 20 15:16:53.957347 master-0 kubenswrapper[28120]: I0220 15:16:53.956483 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ff8a-account-create-update-szf6b"] Feb 20 15:16:53.991207 master-0 kubenswrapper[28120]: I0220 15:16:53.984869 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eab76df2-0be6-47a5-9cec-1283cadcae8f-config-data\") pod \"keystone-db-sync-nmklt\" (UID: \"eab76df2-0be6-47a5-9cec-1283cadcae8f\") " pod="openstack/keystone-db-sync-nmklt" Feb 20 15:16:53.991207 master-0 kubenswrapper[28120]: I0220 15:16:53.984980 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6crj\" (UniqueName: \"kubernetes.io/projected/aaf6c20f-8830-480b-b20a-704a8afa894d-kube-api-access-x6crj\") pod \"neutron-db-create-lpgjh\" (UID: \"aaf6c20f-8830-480b-b20a-704a8afa894d\") " pod="openstack/neutron-db-create-lpgjh" Feb 20 15:16:53.991207 master-0 kubenswrapper[28120]: I0220 15:16:53.985032 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab76df2-0be6-47a5-9cec-1283cadcae8f-combined-ca-bundle\") pod \"keystone-db-sync-nmklt\" (UID: \"eab76df2-0be6-47a5-9cec-1283cadcae8f\") " pod="openstack/keystone-db-sync-nmklt" Feb 20 15:16:53.991207 master-0 kubenswrapper[28120]: I0220 15:16:53.985086 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaf6c20f-8830-480b-b20a-704a8afa894d-operator-scripts\") pod 
\"neutron-db-create-lpgjh\" (UID: \"aaf6c20f-8830-480b-b20a-704a8afa894d\") " pod="openstack/neutron-db-create-lpgjh" Feb 20 15:16:53.991207 master-0 kubenswrapper[28120]: I0220 15:16:53.985112 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2b6m\" (UniqueName: \"kubernetes.io/projected/eab76df2-0be6-47a5-9cec-1283cadcae8f-kube-api-access-r2b6m\") pod \"keystone-db-sync-nmklt\" (UID: \"eab76df2-0be6-47a5-9cec-1283cadcae8f\") " pod="openstack/keystone-db-sync-nmklt" Feb 20 15:16:54.087196 master-0 kubenswrapper[28120]: I0220 15:16:54.087144 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab76df2-0be6-47a5-9cec-1283cadcae8f-combined-ca-bundle\") pod \"keystone-db-sync-nmklt\" (UID: \"eab76df2-0be6-47a5-9cec-1283cadcae8f\") " pod="openstack/keystone-db-sync-nmklt" Feb 20 15:16:54.087428 master-0 kubenswrapper[28120]: I0220 15:16:54.087215 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaf6c20f-8830-480b-b20a-704a8afa894d-operator-scripts\") pod \"neutron-db-create-lpgjh\" (UID: \"aaf6c20f-8830-480b-b20a-704a8afa894d\") " pod="openstack/neutron-db-create-lpgjh" Feb 20 15:16:54.087428 master-0 kubenswrapper[28120]: I0220 15:16:54.087244 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2b6m\" (UniqueName: \"kubernetes.io/projected/eab76df2-0be6-47a5-9cec-1283cadcae8f-kube-api-access-r2b6m\") pod \"keystone-db-sync-nmklt\" (UID: \"eab76df2-0be6-47a5-9cec-1283cadcae8f\") " pod="openstack/keystone-db-sync-nmklt" Feb 20 15:16:54.087428 master-0 kubenswrapper[28120]: I0220 15:16:54.087315 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eab76df2-0be6-47a5-9cec-1283cadcae8f-config-data\") pod \"keystone-db-sync-nmklt\" (UID: \"eab76df2-0be6-47a5-9cec-1283cadcae8f\") " pod="openstack/keystone-db-sync-nmklt" Feb 20 15:16:54.087428 master-0 kubenswrapper[28120]: I0220 15:16:54.087341 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbgxb\" (UniqueName: \"kubernetes.io/projected/31ff0e77-87a0-4a3b-b389-c6e44d24fea3-kube-api-access-qbgxb\") pod \"neutron-ff8a-account-create-update-szf6b\" (UID: \"31ff0e77-87a0-4a3b-b389-c6e44d24fea3\") " pod="openstack/neutron-ff8a-account-create-update-szf6b" Feb 20 15:16:54.087428 master-0 kubenswrapper[28120]: I0220 15:16:54.087380 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31ff0e77-87a0-4a3b-b389-c6e44d24fea3-operator-scripts\") pod \"neutron-ff8a-account-create-update-szf6b\" (UID: \"31ff0e77-87a0-4a3b-b389-c6e44d24fea3\") " pod="openstack/neutron-ff8a-account-create-update-szf6b" Feb 20 15:16:54.087428 master-0 kubenswrapper[28120]: I0220 15:16:54.087417 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6crj\" (UniqueName: \"kubernetes.io/projected/aaf6c20f-8830-480b-b20a-704a8afa894d-kube-api-access-x6crj\") pod \"neutron-db-create-lpgjh\" (UID: \"aaf6c20f-8830-480b-b20a-704a8afa894d\") " pod="openstack/neutron-db-create-lpgjh" Feb 20 15:16:54.104069 master-0 kubenswrapper[28120]: I0220 15:16:54.092337 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eab76df2-0be6-47a5-9cec-1283cadcae8f-config-data\") pod \"keystone-db-sync-nmklt\" (UID: \"eab76df2-0be6-47a5-9cec-1283cadcae8f\") " pod="openstack/keystone-db-sync-nmklt" Feb 20 15:16:54.104069 master-0 kubenswrapper[28120]: I0220 15:16:54.095217 28120 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab76df2-0be6-47a5-9cec-1283cadcae8f-combined-ca-bundle\") pod \"keystone-db-sync-nmklt\" (UID: \"eab76df2-0be6-47a5-9cec-1283cadcae8f\") " pod="openstack/keystone-db-sync-nmklt" Feb 20 15:16:54.104069 master-0 kubenswrapper[28120]: I0220 15:16:54.095966 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaf6c20f-8830-480b-b20a-704a8afa894d-operator-scripts\") pod \"neutron-db-create-lpgjh\" (UID: \"aaf6c20f-8830-480b-b20a-704a8afa894d\") " pod="openstack/neutron-db-create-lpgjh" Feb 20 15:16:54.108865 master-0 kubenswrapper[28120]: I0220 15:16:54.108814 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2b6m\" (UniqueName: \"kubernetes.io/projected/eab76df2-0be6-47a5-9cec-1283cadcae8f-kube-api-access-r2b6m\") pod \"keystone-db-sync-nmklt\" (UID: \"eab76df2-0be6-47a5-9cec-1283cadcae8f\") " pod="openstack/keystone-db-sync-nmklt" Feb 20 15:16:54.116630 master-0 kubenswrapper[28120]: I0220 15:16:54.116580 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6crj\" (UniqueName: \"kubernetes.io/projected/aaf6c20f-8830-480b-b20a-704a8afa894d-kube-api-access-x6crj\") pod \"neutron-db-create-lpgjh\" (UID: \"aaf6c20f-8830-480b-b20a-704a8afa894d\") " pod="openstack/neutron-db-create-lpgjh" Feb 20 15:16:54.185993 master-0 kubenswrapper[28120]: I0220 15:16:54.183577 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lpgjh" Feb 20 15:16:54.191082 master-0 kubenswrapper[28120]: I0220 15:16:54.190100 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbgxb\" (UniqueName: \"kubernetes.io/projected/31ff0e77-87a0-4a3b-b389-c6e44d24fea3-kube-api-access-qbgxb\") pod \"neutron-ff8a-account-create-update-szf6b\" (UID: \"31ff0e77-87a0-4a3b-b389-c6e44d24fea3\") " pod="openstack/neutron-ff8a-account-create-update-szf6b" Feb 20 15:16:54.191082 master-0 kubenswrapper[28120]: I0220 15:16:54.190185 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31ff0e77-87a0-4a3b-b389-c6e44d24fea3-operator-scripts\") pod \"neutron-ff8a-account-create-update-szf6b\" (UID: \"31ff0e77-87a0-4a3b-b389-c6e44d24fea3\") " pod="openstack/neutron-ff8a-account-create-update-szf6b" Feb 20 15:16:54.195139 master-0 kubenswrapper[28120]: I0220 15:16:54.192213 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31ff0e77-87a0-4a3b-b389-c6e44d24fea3-operator-scripts\") pod \"neutron-ff8a-account-create-update-szf6b\" (UID: \"31ff0e77-87a0-4a3b-b389-c6e44d24fea3\") " pod="openstack/neutron-ff8a-account-create-update-szf6b" Feb 20 15:16:54.208045 master-0 kubenswrapper[28120]: I0220 15:16:54.206128 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbgxb\" (UniqueName: \"kubernetes.io/projected/31ff0e77-87a0-4a3b-b389-c6e44d24fea3-kube-api-access-qbgxb\") pod \"neutron-ff8a-account-create-update-szf6b\" (UID: \"31ff0e77-87a0-4a3b-b389-c6e44d24fea3\") " pod="openstack/neutron-ff8a-account-create-update-szf6b" Feb 20 15:16:54.348504 master-0 kubenswrapper[28120]: I0220 15:16:54.348457 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nmklt" Feb 20 15:16:54.365302 master-0 kubenswrapper[28120]: I0220 15:16:54.361090 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ff8a-account-create-update-szf6b" Feb 20 15:16:54.387175 master-0 kubenswrapper[28120]: I0220 15:16:54.387102 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vgtxx"] Feb 20 15:16:54.411314 master-0 kubenswrapper[28120]: W0220 15:16:54.411252 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f8f0488_62d7_4225_adc3_b536806b56c7.slice/crio-cf4f161d50d5009c22aee8b724f99abd8faa1eab3b9f90ecd370876d0283dd9b WatchSource:0}: Error finding container cf4f161d50d5009c22aee8b724f99abd8faa1eab3b9f90ecd370876d0283dd9b: Status 404 returned error can't find the container with id cf4f161d50d5009c22aee8b724f99abd8faa1eab3b9f90ecd370876d0283dd9b Feb 20 15:16:54.426234 master-0 kubenswrapper[28120]: I0220 15:16:54.426135 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 20 15:16:54.485203 master-0 kubenswrapper[28120]: W0220 15:16:54.485150 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119ab416_9782_4f7d_a843_15903771feb0.slice/crio-98aa51e7f6ce8fc669e8628d0b6d80bbc8ccf000c12f02079f0f8e9d3b5d58c7 WatchSource:0}: Error finding container 98aa51e7f6ce8fc669e8628d0b6d80bbc8ccf000c12f02079f0f8e9d3b5d58c7: Status 404 returned error can't find the container with id 98aa51e7f6ce8fc669e8628d0b6d80bbc8ccf000c12f02079f0f8e9d3b5d58c7 Feb 20 15:16:54.501031 master-0 kubenswrapper[28120]: I0220 15:16:54.500982 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2c3c-account-create-update-fvlnz"] Feb 20 15:16:54.684527 master-0 kubenswrapper[28120]: I0220 
15:16:54.684486 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lpgjh"] Feb 20 15:16:54.767673 master-0 kubenswrapper[28120]: I0220 15:16:54.767615 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lpgjh" event={"ID":"aaf6c20f-8830-480b-b20a-704a8afa894d","Type":"ContainerStarted","Data":"fad347467816d326770b3771af38454d2a02aaaee99edd560c81a7f3911c97a5"} Feb 20 15:16:54.771949 master-0 kubenswrapper[28120]: I0220 15:16:54.770325 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vgtxx" event={"ID":"1f8f0488-62d7-4225-adc3-b536806b56c7","Type":"ContainerStarted","Data":"31652261467bf78a8b1a4cff0f390a39afa001021acee8624c7e29e139d0b099"} Feb 20 15:16:54.771949 master-0 kubenswrapper[28120]: I0220 15:16:54.770359 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vgtxx" event={"ID":"1f8f0488-62d7-4225-adc3-b536806b56c7","Type":"ContainerStarted","Data":"cf4f161d50d5009c22aee8b724f99abd8faa1eab3b9f90ecd370876d0283dd9b"} Feb 20 15:16:54.775945 master-0 kubenswrapper[28120]: I0220 15:16:54.773026 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c3c-account-create-update-fvlnz" event={"ID":"119ab416-9782-4f7d-a843-15903771feb0","Type":"ContainerStarted","Data":"8433294c69d076deafee059faf170d983e4547292276f2cbe9869a3386e3e65c"} Feb 20 15:16:54.775945 master-0 kubenswrapper[28120]: I0220 15:16:54.773055 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c3c-account-create-update-fvlnz" event={"ID":"119ab416-9782-4f7d-a843-15903771feb0","Type":"ContainerStarted","Data":"98aa51e7f6ce8fc669e8628d0b6d80bbc8ccf000c12f02079f0f8e9d3b5d58c7"} Feb 20 15:16:54.794205 master-0 kubenswrapper[28120]: I0220 15:16:54.791288 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-vgtxx" podStartSLOduration=1.791261935 
podStartE2EDuration="1.791261935s" podCreationTimestamp="2026-02-20 15:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:16:54.785274406 +0000 UTC m=+953.046067969" watchObservedRunningTime="2026-02-20 15:16:54.791261935 +0000 UTC m=+953.052055498" Feb 20 15:16:54.826002 master-0 kubenswrapper[28120]: I0220 15:16:54.822265 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2c3c-account-create-update-fvlnz" podStartSLOduration=1.822245128 podStartE2EDuration="1.822245128s" podCreationTimestamp="2026-02-20 15:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:16:54.80588009 +0000 UTC m=+953.066673653" watchObservedRunningTime="2026-02-20 15:16:54.822245128 +0000 UTC m=+953.083038691" Feb 20 15:16:54.905040 master-0 kubenswrapper[28120]: I0220 15:16:54.903596 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nmklt"] Feb 20 15:16:54.996013 master-0 kubenswrapper[28120]: I0220 15:16:54.995968 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ff8a-account-create-update-szf6b"] Feb 20 15:16:55.011469 master-0 kubenswrapper[28120]: W0220 15:16:55.011406 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31ff0e77_87a0_4a3b_b389_c6e44d24fea3.slice/crio-71d8a156f0b3dbb7c45ba52850d8c6f6368f1e9ac2b0cd879e20af03cae69ec8 WatchSource:0}: Error finding container 71d8a156f0b3dbb7c45ba52850d8c6f6368f1e9ac2b0cd879e20af03cae69ec8: Status 404 returned error can't find the container with id 71d8a156f0b3dbb7c45ba52850d8c6f6368f1e9ac2b0cd879e20af03cae69ec8 Feb 20 15:16:55.256454 master-0 kubenswrapper[28120]: I0220 15:16:55.256296 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:16:55.337941 master-0 kubenswrapper[28120]: I0220 15:16:55.337413 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-wjnmk"] Feb 20 15:16:55.337941 master-0 kubenswrapper[28120]: I0220 15:16:55.337657 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" podUID="1042f39c-bf98-4a4f-92ed-5b937bbc88f1" containerName="dnsmasq-dns" containerID="cri-o://c530edc6df6a6aae8c87428b8cd75716632ac75413dba47a954eba749a097089" gracePeriod=10 Feb 20 15:16:55.818790 master-0 kubenswrapper[28120]: I0220 15:16:55.817269 28120 generic.go:334] "Generic (PLEG): container finished" podID="1042f39c-bf98-4a4f-92ed-5b937bbc88f1" containerID="c530edc6df6a6aae8c87428b8cd75716632ac75413dba47a954eba749a097089" exitCode=0 Feb 20 15:16:55.818790 master-0 kubenswrapper[28120]: I0220 15:16:55.817341 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" event={"ID":"1042f39c-bf98-4a4f-92ed-5b937bbc88f1","Type":"ContainerDied","Data":"c530edc6df6a6aae8c87428b8cd75716632ac75413dba47a954eba749a097089"} Feb 20 15:16:55.820316 master-0 kubenswrapper[28120]: I0220 15:16:55.820292 28120 generic.go:334] "Generic (PLEG): container finished" podID="119ab416-9782-4f7d-a843-15903771feb0" containerID="8433294c69d076deafee059faf170d983e4547292276f2cbe9869a3386e3e65c" exitCode=0 Feb 20 15:16:55.820371 master-0 kubenswrapper[28120]: I0220 15:16:55.820340 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c3c-account-create-update-fvlnz" event={"ID":"119ab416-9782-4f7d-a843-15903771feb0","Type":"ContainerDied","Data":"8433294c69d076deafee059faf170d983e4547292276f2cbe9869a3386e3e65c"} Feb 20 15:16:55.849609 master-0 kubenswrapper[28120]: I0220 15:16:55.848951 28120 generic.go:334] "Generic (PLEG): container finished" podID="31ff0e77-87a0-4a3b-b389-c6e44d24fea3" 
containerID="cd76cba1e4d8887069f63dd913b6db433e0081d6fffc9eec9e2daa845305d7e9" exitCode=0 Feb 20 15:16:55.855186 master-0 kubenswrapper[28120]: I0220 15:16:55.854998 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ff8a-account-create-update-szf6b" event={"ID":"31ff0e77-87a0-4a3b-b389-c6e44d24fea3","Type":"ContainerDied","Data":"cd76cba1e4d8887069f63dd913b6db433e0081d6fffc9eec9e2daa845305d7e9"} Feb 20 15:16:55.855186 master-0 kubenswrapper[28120]: I0220 15:16:55.855059 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ff8a-account-create-update-szf6b" event={"ID":"31ff0e77-87a0-4a3b-b389-c6e44d24fea3","Type":"ContainerStarted","Data":"71d8a156f0b3dbb7c45ba52850d8c6f6368f1e9ac2b0cd879e20af03cae69ec8"} Feb 20 15:16:55.856807 master-0 kubenswrapper[28120]: I0220 15:16:55.856766 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nmklt" event={"ID":"eab76df2-0be6-47a5-9cec-1283cadcae8f","Type":"ContainerStarted","Data":"9dac7e746a263aa5e605c54d1ae999726b3290d438c8fc416e8aabbf53d9e251"} Feb 20 15:16:55.858752 master-0 kubenswrapper[28120]: I0220 15:16:55.858706 28120 generic.go:334] "Generic (PLEG): container finished" podID="aaf6c20f-8830-480b-b20a-704a8afa894d" containerID="47890fdb07b15f56f4a4f0ac850f6b77335c757f096834c2833aaab2c3ca190c" exitCode=0 Feb 20 15:16:55.858809 master-0 kubenswrapper[28120]: I0220 15:16:55.858772 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lpgjh" event={"ID":"aaf6c20f-8830-480b-b20a-704a8afa894d","Type":"ContainerDied","Data":"47890fdb07b15f56f4a4f0ac850f6b77335c757f096834c2833aaab2c3ca190c"} Feb 20 15:16:55.861357 master-0 kubenswrapper[28120]: I0220 15:16:55.861309 28120 generic.go:334] "Generic (PLEG): container finished" podID="1f8f0488-62d7-4225-adc3-b536806b56c7" containerID="31652261467bf78a8b1a4cff0f390a39afa001021acee8624c7e29e139d0b099" exitCode=0 Feb 20 15:16:55.861415 master-0 
kubenswrapper[28120]: I0220 15:16:55.861359 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vgtxx" event={"ID":"1f8f0488-62d7-4225-adc3-b536806b56c7","Type":"ContainerDied","Data":"31652261467bf78a8b1a4cff0f390a39afa001021acee8624c7e29e139d0b099"} Feb 20 15:16:55.883616 master-0 kubenswrapper[28120]: I0220 15:16:55.883560 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:55.957451 master-0 kubenswrapper[28120]: I0220 15:16:55.957384 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-config\") pod \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " Feb 20 15:16:55.957662 master-0 kubenswrapper[28120]: I0220 15:16:55.957568 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-ovsdbserver-nb\") pod \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " Feb 20 15:16:55.957723 master-0 kubenswrapper[28120]: I0220 15:16:55.957675 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-dns-svc\") pod \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " Feb 20 15:16:55.957836 master-0 kubenswrapper[28120]: I0220 15:16:55.957805 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-ovsdbserver-sb\") pod \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " Feb 20 15:16:55.957893 master-0 kubenswrapper[28120]: I0220 
15:16:55.957883 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmgcn\" (UniqueName: \"kubernetes.io/projected/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-kube-api-access-gmgcn\") pod \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\" (UID: \"1042f39c-bf98-4a4f-92ed-5b937bbc88f1\") " Feb 20 15:16:55.963798 master-0 kubenswrapper[28120]: I0220 15:16:55.963744 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-kube-api-access-gmgcn" (OuterVolumeSpecName: "kube-api-access-gmgcn") pod "1042f39c-bf98-4a4f-92ed-5b937bbc88f1" (UID: "1042f39c-bf98-4a4f-92ed-5b937bbc88f1"). InnerVolumeSpecName "kube-api-access-gmgcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:16:56.038807 master-0 kubenswrapper[28120]: I0220 15:16:56.038743 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1042f39c-bf98-4a4f-92ed-5b937bbc88f1" (UID: "1042f39c-bf98-4a4f-92ed-5b937bbc88f1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:56.041706 master-0 kubenswrapper[28120]: I0220 15:16:56.041671 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1042f39c-bf98-4a4f-92ed-5b937bbc88f1" (UID: "1042f39c-bf98-4a4f-92ed-5b937bbc88f1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:56.042507 master-0 kubenswrapper[28120]: I0220 15:16:56.042476 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-config" (OuterVolumeSpecName: "config") pod "1042f39c-bf98-4a4f-92ed-5b937bbc88f1" (UID: "1042f39c-bf98-4a4f-92ed-5b937bbc88f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:56.047364 master-0 kubenswrapper[28120]: I0220 15:16:56.047165 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1042f39c-bf98-4a4f-92ed-5b937bbc88f1" (UID: "1042f39c-bf98-4a4f-92ed-5b937bbc88f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:56.061915 master-0 kubenswrapper[28120]: I0220 15:16:56.060537 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:56.061915 master-0 kubenswrapper[28120]: I0220 15:16:56.060573 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:56.061915 master-0 kubenswrapper[28120]: I0220 15:16:56.060582 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:56.061915 master-0 kubenswrapper[28120]: I0220 15:16:56.060591 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:56.061915 master-0 kubenswrapper[28120]: I0220 15:16:56.060601 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmgcn\" (UniqueName: \"kubernetes.io/projected/1042f39c-bf98-4a4f-92ed-5b937bbc88f1-kube-api-access-gmgcn\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:56.877358 master-0 kubenswrapper[28120]: I0220 15:16:56.877238 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" event={"ID":"1042f39c-bf98-4a4f-92ed-5b937bbc88f1","Type":"ContainerDied","Data":"70928ee6bde31cc19bee02811d7a1d3f237ab49cca76b0e61f4b3853abba7d47"} Feb 20 15:16:56.877358 master-0 kubenswrapper[28120]: I0220 15:16:56.877332 28120 scope.go:117] "RemoveContainer" containerID="c530edc6df6a6aae8c87428b8cd75716632ac75413dba47a954eba749a097089" Feb 20 15:16:56.879288 master-0 kubenswrapper[28120]: I0220 15:16:56.877388 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-wjnmk" Feb 20 15:16:56.881392 master-0 kubenswrapper[28120]: I0220 15:16:56.881342 28120 generic.go:334] "Generic (PLEG): container finished" podID="1f0398cf-ba38-435d-b5a6-44254a2cb187" containerID="4d2fcadee999e60e0c45f3652858d10d9d957882fbd8f5a778a0efd228d28713" exitCode=0 Feb 20 15:16:56.881489 master-0 kubenswrapper[28120]: I0220 15:16:56.881403 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ksw92" event={"ID":"1f0398cf-ba38-435d-b5a6-44254a2cb187","Type":"ContainerDied","Data":"4d2fcadee999e60e0c45f3652858d10d9d957882fbd8f5a778a0efd228d28713"} Feb 20 15:16:56.899696 master-0 kubenswrapper[28120]: I0220 15:16:56.899228 28120 scope.go:117] "RemoveContainer" containerID="2de49e6702d99621e5d47e0b691e94452fd66b24515c959245ce30730e84d1a3" Feb 20 15:16:56.936000 master-0 kubenswrapper[28120]: I0220 15:16:56.935511 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-wjnmk"] Feb 20 15:16:56.944058 master-0 kubenswrapper[28120]: I0220 15:16:56.944011 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-wjnmk"] Feb 20 15:16:57.501315 master-0 kubenswrapper[28120]: I0220 15:16:57.500912 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-vgtxx" Feb 20 15:16:57.589901 master-0 kubenswrapper[28120]: I0220 15:16:57.589839 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x57sx\" (UniqueName: \"kubernetes.io/projected/1f8f0488-62d7-4225-adc3-b536806b56c7-kube-api-access-x57sx\") pod \"1f8f0488-62d7-4225-adc3-b536806b56c7\" (UID: \"1f8f0488-62d7-4225-adc3-b536806b56c7\") " Feb 20 15:16:57.592476 master-0 kubenswrapper[28120]: I0220 15:16:57.591252 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8f0488-62d7-4225-adc3-b536806b56c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f8f0488-62d7-4225-adc3-b536806b56c7" (UID: "1f8f0488-62d7-4225-adc3-b536806b56c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:57.592896 master-0 kubenswrapper[28120]: I0220 15:16:57.589909 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f8f0488-62d7-4225-adc3-b536806b56c7-operator-scripts\") pod \"1f8f0488-62d7-4225-adc3-b536806b56c7\" (UID: \"1f8f0488-62d7-4225-adc3-b536806b56c7\") " Feb 20 15:16:57.594268 master-0 kubenswrapper[28120]: I0220 15:16:57.594195 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f8f0488-62d7-4225-adc3-b536806b56c7-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:57.596079 master-0 kubenswrapper[28120]: I0220 15:16:57.596039 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8f0488-62d7-4225-adc3-b536806b56c7-kube-api-access-x57sx" (OuterVolumeSpecName: "kube-api-access-x57sx") pod "1f8f0488-62d7-4225-adc3-b536806b56c7" (UID: "1f8f0488-62d7-4225-adc3-b536806b56c7"). InnerVolumeSpecName "kube-api-access-x57sx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:16:57.618468 master-0 kubenswrapper[28120]: I0220 15:16:57.616453 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2c3c-account-create-update-fvlnz" Feb 20 15:16:57.624569 master-0 kubenswrapper[28120]: I0220 15:16:57.624524 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lpgjh" Feb 20 15:16:57.629750 master-0 kubenswrapper[28120]: I0220 15:16:57.629708 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ff8a-account-create-update-szf6b" Feb 20 15:16:57.703477 master-0 kubenswrapper[28120]: I0220 15:16:57.698914 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaf6c20f-8830-480b-b20a-704a8afa894d-operator-scripts\") pod \"aaf6c20f-8830-480b-b20a-704a8afa894d\" (UID: \"aaf6c20f-8830-480b-b20a-704a8afa894d\") " Feb 20 15:16:57.703477 master-0 kubenswrapper[28120]: I0220 15:16:57.698999 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hhl8\" (UniqueName: \"kubernetes.io/projected/119ab416-9782-4f7d-a843-15903771feb0-kube-api-access-4hhl8\") pod \"119ab416-9782-4f7d-a843-15903771feb0\" (UID: \"119ab416-9782-4f7d-a843-15903771feb0\") " Feb 20 15:16:57.703477 master-0 kubenswrapper[28120]: I0220 15:16:57.699029 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbgxb\" (UniqueName: \"kubernetes.io/projected/31ff0e77-87a0-4a3b-b389-c6e44d24fea3-kube-api-access-qbgxb\") pod \"31ff0e77-87a0-4a3b-b389-c6e44d24fea3\" (UID: \"31ff0e77-87a0-4a3b-b389-c6e44d24fea3\") " Feb 20 15:16:57.703477 master-0 kubenswrapper[28120]: I0220 15:16:57.699088 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119ab416-9782-4f7d-a843-15903771feb0-operator-scripts\") pod \"119ab416-9782-4f7d-a843-15903771feb0\" (UID: \"119ab416-9782-4f7d-a843-15903771feb0\") " Feb 20 15:16:57.703477 master-0 kubenswrapper[28120]: I0220 15:16:57.699118 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6crj\" (UniqueName: \"kubernetes.io/projected/aaf6c20f-8830-480b-b20a-704a8afa894d-kube-api-access-x6crj\") pod \"aaf6c20f-8830-480b-b20a-704a8afa894d\" (UID: \"aaf6c20f-8830-480b-b20a-704a8afa894d\") " Feb 20 15:16:57.703477 master-0 kubenswrapper[28120]: I0220 15:16:57.699174 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31ff0e77-87a0-4a3b-b389-c6e44d24fea3-operator-scripts\") pod \"31ff0e77-87a0-4a3b-b389-c6e44d24fea3\" (UID: \"31ff0e77-87a0-4a3b-b389-c6e44d24fea3\") " Feb 20 15:16:57.703477 master-0 kubenswrapper[28120]: I0220 15:16:57.700282 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x57sx\" (UniqueName: \"kubernetes.io/projected/1f8f0488-62d7-4225-adc3-b536806b56c7-kube-api-access-x57sx\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:57.703947 master-0 kubenswrapper[28120]: I0220 15:16:57.703783 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ff0e77-87a0-4a3b-b389-c6e44d24fea3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31ff0e77-87a0-4a3b-b389-c6e44d24fea3" (UID: "31ff0e77-87a0-4a3b-b389-c6e44d24fea3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:57.705312 master-0 kubenswrapper[28120]: I0220 15:16:57.704941 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaf6c20f-8830-480b-b20a-704a8afa894d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aaf6c20f-8830-480b-b20a-704a8afa894d" (UID: "aaf6c20f-8830-480b-b20a-704a8afa894d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:57.711553 master-0 kubenswrapper[28120]: I0220 15:16:57.708699 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/119ab416-9782-4f7d-a843-15903771feb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "119ab416-9782-4f7d-a843-15903771feb0" (UID: "119ab416-9782-4f7d-a843-15903771feb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:16:57.711553 master-0 kubenswrapper[28120]: I0220 15:16:57.709358 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf6c20f-8830-480b-b20a-704a8afa894d-kube-api-access-x6crj" (OuterVolumeSpecName: "kube-api-access-x6crj") pod "aaf6c20f-8830-480b-b20a-704a8afa894d" (UID: "aaf6c20f-8830-480b-b20a-704a8afa894d"). InnerVolumeSpecName "kube-api-access-x6crj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:16:57.713518 master-0 kubenswrapper[28120]: I0220 15:16:57.713471 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ff0e77-87a0-4a3b-b389-c6e44d24fea3-kube-api-access-qbgxb" (OuterVolumeSpecName: "kube-api-access-qbgxb") pod "31ff0e77-87a0-4a3b-b389-c6e44d24fea3" (UID: "31ff0e77-87a0-4a3b-b389-c6e44d24fea3"). InnerVolumeSpecName "kube-api-access-qbgxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:16:57.717189 master-0 kubenswrapper[28120]: I0220 15:16:57.717116 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/119ab416-9782-4f7d-a843-15903771feb0-kube-api-access-4hhl8" (OuterVolumeSpecName: "kube-api-access-4hhl8") pod "119ab416-9782-4f7d-a843-15903771feb0" (UID: "119ab416-9782-4f7d-a843-15903771feb0"). InnerVolumeSpecName "kube-api-access-4hhl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:16:57.802649 master-0 kubenswrapper[28120]: I0220 15:16:57.802583 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaf6c20f-8830-480b-b20a-704a8afa894d-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:57.802862 master-0 kubenswrapper[28120]: I0220 15:16:57.802660 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hhl8\" (UniqueName: \"kubernetes.io/projected/119ab416-9782-4f7d-a843-15903771feb0-kube-api-access-4hhl8\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:57.802862 master-0 kubenswrapper[28120]: I0220 15:16:57.802674 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbgxb\" (UniqueName: \"kubernetes.io/projected/31ff0e77-87a0-4a3b-b389-c6e44d24fea3-kube-api-access-qbgxb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:57.802862 master-0 kubenswrapper[28120]: I0220 15:16:57.802683 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119ab416-9782-4f7d-a843-15903771feb0-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:57.802862 master-0 kubenswrapper[28120]: I0220 15:16:57.802692 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6crj\" (UniqueName: \"kubernetes.io/projected/aaf6c20f-8830-480b-b20a-704a8afa894d-kube-api-access-x6crj\") on node 
\"master-0\" DevicePath \"\"" Feb 20 15:16:57.802862 master-0 kubenswrapper[28120]: I0220 15:16:57.802701 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31ff0e77-87a0-4a3b-b389-c6e44d24fea3-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:16:57.895314 master-0 kubenswrapper[28120]: I0220 15:16:57.895197 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ff8a-account-create-update-szf6b" event={"ID":"31ff0e77-87a0-4a3b-b389-c6e44d24fea3","Type":"ContainerDied","Data":"71d8a156f0b3dbb7c45ba52850d8c6f6368f1e9ac2b0cd879e20af03cae69ec8"} Feb 20 15:16:57.895314 master-0 kubenswrapper[28120]: I0220 15:16:57.895260 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71d8a156f0b3dbb7c45ba52850d8c6f6368f1e9ac2b0cd879e20af03cae69ec8" Feb 20 15:16:57.895314 master-0 kubenswrapper[28120]: I0220 15:16:57.895218 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ff8a-account-create-update-szf6b" Feb 20 15:16:57.897804 master-0 kubenswrapper[28120]: I0220 15:16:57.897742 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lpgjh" event={"ID":"aaf6c20f-8830-480b-b20a-704a8afa894d","Type":"ContainerDied","Data":"fad347467816d326770b3771af38454d2a02aaaee99edd560c81a7f3911c97a5"} Feb 20 15:16:57.897870 master-0 kubenswrapper[28120]: I0220 15:16:57.897810 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad347467816d326770b3771af38454d2a02aaaee99edd560c81a7f3911c97a5" Feb 20 15:16:57.898045 master-0 kubenswrapper[28120]: I0220 15:16:57.898014 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lpgjh" Feb 20 15:16:57.899677 master-0 kubenswrapper[28120]: I0220 15:16:57.899638 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vgtxx" event={"ID":"1f8f0488-62d7-4225-adc3-b536806b56c7","Type":"ContainerDied","Data":"cf4f161d50d5009c22aee8b724f99abd8faa1eab3b9f90ecd370876d0283dd9b"} Feb 20 15:16:57.899730 master-0 kubenswrapper[28120]: I0220 15:16:57.899680 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4f161d50d5009c22aee8b724f99abd8faa1eab3b9f90ecd370876d0283dd9b" Feb 20 15:16:57.899763 master-0 kubenswrapper[28120]: I0220 15:16:57.899727 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vgtxx" Feb 20 15:16:57.902144 master-0 kubenswrapper[28120]: I0220 15:16:57.902122 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2c3c-account-create-update-fvlnz" Feb 20 15:16:57.910228 master-0 kubenswrapper[28120]: I0220 15:16:57.909757 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c3c-account-create-update-fvlnz" event={"ID":"119ab416-9782-4f7d-a843-15903771feb0","Type":"ContainerDied","Data":"98aa51e7f6ce8fc669e8628d0b6d80bbc8ccf000c12f02079f0f8e9d3b5d58c7"} Feb 20 15:16:57.910228 master-0 kubenswrapper[28120]: I0220 15:16:57.909887 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98aa51e7f6ce8fc669e8628d0b6d80bbc8ccf000c12f02079f0f8e9d3b5d58c7" Feb 20 15:16:58.074523 master-0 kubenswrapper[28120]: I0220 15:16:58.074460 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1042f39c-bf98-4a4f-92ed-5b937bbc88f1" path="/var/lib/kubelet/pods/1042f39c-bf98-4a4f-92ed-5b937bbc88f1/volumes" Feb 20 15:17:00.772949 master-0 kubenswrapper[28120]: I0220 15:17:00.772874 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ksw92" Feb 20 15:17:00.783268 master-0 kubenswrapper[28120]: I0220 15:17:00.783150 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-config-data\") pod \"1f0398cf-ba38-435d-b5a6-44254a2cb187\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " Feb 20 15:17:00.783268 master-0 kubenswrapper[28120]: I0220 15:17:00.783245 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dstvx\" (UniqueName: \"kubernetes.io/projected/1f0398cf-ba38-435d-b5a6-44254a2cb187-kube-api-access-dstvx\") pod \"1f0398cf-ba38-435d-b5a6-44254a2cb187\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " Feb 20 15:17:00.783498 master-0 kubenswrapper[28120]: I0220 15:17:00.783401 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-combined-ca-bundle\") pod \"1f0398cf-ba38-435d-b5a6-44254a2cb187\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " Feb 20 15:17:00.783571 master-0 kubenswrapper[28120]: I0220 15:17:00.783501 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-db-sync-config-data\") pod \"1f0398cf-ba38-435d-b5a6-44254a2cb187\" (UID: \"1f0398cf-ba38-435d-b5a6-44254a2cb187\") " Feb 20 15:17:00.788346 master-0 kubenswrapper[28120]: I0220 15:17:00.788298 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0398cf-ba38-435d-b5a6-44254a2cb187-kube-api-access-dstvx" (OuterVolumeSpecName: "kube-api-access-dstvx") pod "1f0398cf-ba38-435d-b5a6-44254a2cb187" (UID: "1f0398cf-ba38-435d-b5a6-44254a2cb187"). InnerVolumeSpecName "kube-api-access-dstvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:00.789800 master-0 kubenswrapper[28120]: I0220 15:17:00.789736 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1f0398cf-ba38-435d-b5a6-44254a2cb187" (UID: "1f0398cf-ba38-435d-b5a6-44254a2cb187"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:00.837877 master-0 kubenswrapper[28120]: I0220 15:17:00.837701 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f0398cf-ba38-435d-b5a6-44254a2cb187" (UID: "1f0398cf-ba38-435d-b5a6-44254a2cb187"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:00.888338 master-0 kubenswrapper[28120]: I0220 15:17:00.888265 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dstvx\" (UniqueName: \"kubernetes.io/projected/1f0398cf-ba38-435d-b5a6-44254a2cb187-kube-api-access-dstvx\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:00.888338 master-0 kubenswrapper[28120]: I0220 15:17:00.888329 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:00.888600 master-0 kubenswrapper[28120]: I0220 15:17:00.888349 28120 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:00.894227 master-0 kubenswrapper[28120]: I0220 15:17:00.894165 28120 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-config-data" (OuterVolumeSpecName: "config-data") pod "1f0398cf-ba38-435d-b5a6-44254a2cb187" (UID: "1f0398cf-ba38-435d-b5a6-44254a2cb187"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:00.944497 master-0 kubenswrapper[28120]: I0220 15:17:00.944405 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ksw92" event={"ID":"1f0398cf-ba38-435d-b5a6-44254a2cb187","Type":"ContainerDied","Data":"e14d5994a0a1619c6947dfad80a14e506c26663727d675ff244fb689217e963d"} Feb 20 15:17:00.944497 master-0 kubenswrapper[28120]: I0220 15:17:00.944496 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e14d5994a0a1619c6947dfad80a14e506c26663727d675ff244fb689217e963d" Feb 20 15:17:00.944795 master-0 kubenswrapper[28120]: I0220 15:17:00.944439 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ksw92" Feb 20 15:17:00.946267 master-0 kubenswrapper[28120]: I0220 15:17:00.946194 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nmklt" event={"ID":"eab76df2-0be6-47a5-9cec-1283cadcae8f","Type":"ContainerStarted","Data":"49037054d973db3bf51b26a1890fc69911328f3dccdd5954a90d78fa70fbb612"} Feb 20 15:17:00.983508 master-0 kubenswrapper[28120]: I0220 15:17:00.983399 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-nmklt" podStartSLOduration=2.27976796 podStartE2EDuration="7.983377075s" podCreationTimestamp="2026-02-20 15:16:53 +0000 UTC" firstStartedPulling="2026-02-20 15:16:54.907202815 +0000 UTC m=+953.167996378" lastFinishedPulling="2026-02-20 15:17:00.61081193 +0000 UTC m=+958.871605493" observedRunningTime="2026-02-20 15:17:00.967743796 +0000 UTC m=+959.228537429" watchObservedRunningTime="2026-02-20 15:17:00.983377075 +0000 UTC m=+959.244170648" Feb 20 15:17:00.991904 master-0 kubenswrapper[28120]: I0220 15:17:00.991853 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0398cf-ba38-435d-b5a6-44254a2cb187-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.338408 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-676f54c559-lmbln"] Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: E0220 15:17:02.339044 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1042f39c-bf98-4a4f-92ed-5b937bbc88f1" containerName="init" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339066 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1042f39c-bf98-4a4f-92ed-5b937bbc88f1" containerName="init" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: E0220 15:17:02.339094 28120 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1f0398cf-ba38-435d-b5a6-44254a2cb187" containerName="glance-db-sync" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339104 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0398cf-ba38-435d-b5a6-44254a2cb187" containerName="glance-db-sync" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: E0220 15:17:02.339119 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ff0e77-87a0-4a3b-b389-c6e44d24fea3" containerName="mariadb-account-create-update" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339130 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ff0e77-87a0-4a3b-b389-c6e44d24fea3" containerName="mariadb-account-create-update" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: E0220 15:17:02.339156 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1042f39c-bf98-4a4f-92ed-5b937bbc88f1" containerName="dnsmasq-dns" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339165 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1042f39c-bf98-4a4f-92ed-5b937bbc88f1" containerName="dnsmasq-dns" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: E0220 15:17:02.339178 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="119ab416-9782-4f7d-a843-15903771feb0" containerName="mariadb-account-create-update" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339186 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="119ab416-9782-4f7d-a843-15903771feb0" containerName="mariadb-account-create-update" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: E0220 15:17:02.339208 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf6c20f-8830-480b-b20a-704a8afa894d" containerName="mariadb-database-create" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339217 28120 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aaf6c20f-8830-480b-b20a-704a8afa894d" containerName="mariadb-database-create" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: E0220 15:17:02.339237 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8f0488-62d7-4225-adc3-b536806b56c7" containerName="mariadb-database-create" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339246 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8f0488-62d7-4225-adc3-b536806b56c7" containerName="mariadb-database-create" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339541 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="119ab416-9782-4f7d-a843-15903771feb0" containerName="mariadb-account-create-update" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339580 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ff0e77-87a0-4a3b-b389-c6e44d24fea3" containerName="mariadb-account-create-update" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339598 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="1042f39c-bf98-4a4f-92ed-5b937bbc88f1" containerName="dnsmasq-dns" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339613 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8f0488-62d7-4225-adc3-b536806b56c7" containerName="mariadb-database-create" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339628 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0398cf-ba38-435d-b5a6-44254a2cb187" containerName="glance-db-sync" Feb 20 15:17:02.340944 master-0 kubenswrapper[28120]: I0220 15:17:02.339642 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf6c20f-8830-480b-b20a-704a8afa894d" containerName="mariadb-database-create" Feb 20 15:17:02.351947 master-0 kubenswrapper[28120]: I0220 15:17:02.347101 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.362475 master-0 kubenswrapper[28120]: I0220 15:17:02.360840 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-676f54c559-lmbln"] Feb 20 15:17:02.430688 master-0 kubenswrapper[28120]: I0220 15:17:02.430445 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-dns-svc\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.430688 master-0 kubenswrapper[28120]: I0220 15:17:02.430532 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-sb\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.430688 master-0 kubenswrapper[28120]: I0220 15:17:02.430638 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6psfr\" (UniqueName: \"kubernetes.io/projected/622b7edf-75d9-42d6-bd8e-39e63d6294b1-kube-api-access-6psfr\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.431091 master-0 kubenswrapper[28120]: I0220 15:17:02.430911 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-config\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.431091 master-0 kubenswrapper[28120]: I0220 15:17:02.431007 
28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-dns-swift-storage-0\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.431091 master-0 kubenswrapper[28120]: I0220 15:17:02.431069 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-nb\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.533390 master-0 kubenswrapper[28120]: I0220 15:17:02.533265 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-dns-svc\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.533390 master-0 kubenswrapper[28120]: I0220 15:17:02.533324 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-sb\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.533390 master-0 kubenswrapper[28120]: I0220 15:17:02.533354 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6psfr\" (UniqueName: \"kubernetes.io/projected/622b7edf-75d9-42d6-bd8e-39e63d6294b1-kube-api-access-6psfr\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 
15:17:02.533665 master-0 kubenswrapper[28120]: I0220 15:17:02.533439 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-config\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.533665 master-0 kubenswrapper[28120]: I0220 15:17:02.533494 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-dns-swift-storage-0\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.533665 master-0 kubenswrapper[28120]: I0220 15:17:02.533535 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-nb\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.534542 master-0 kubenswrapper[28120]: I0220 15:17:02.534474 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-dns-svc\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.534949 master-0 kubenswrapper[28120]: I0220 15:17:02.534908 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-nb\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.535089 
master-0 kubenswrapper[28120]: I0220 15:17:02.535035 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-config\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.535585 master-0 kubenswrapper[28120]: I0220 15:17:02.535552 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-sb\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.535638 master-0 kubenswrapper[28120]: I0220 15:17:02.535614 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-dns-swift-storage-0\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.562140 master-0 kubenswrapper[28120]: I0220 15:17:02.562071 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6psfr\" (UniqueName: \"kubernetes.io/projected/622b7edf-75d9-42d6-bd8e-39e63d6294b1-kube-api-access-6psfr\") pod \"dnsmasq-dns-676f54c559-lmbln\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:02.689473 master-0 kubenswrapper[28120]: I0220 15:17:02.688503 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:03.255445 master-0 kubenswrapper[28120]: I0220 15:17:03.254212 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-676f54c559-lmbln"] Feb 20 15:17:03.981049 master-0 kubenswrapper[28120]: I0220 15:17:03.980939 28120 generic.go:334] "Generic (PLEG): container finished" podID="622b7edf-75d9-42d6-bd8e-39e63d6294b1" containerID="2f941d0520d38e5d4f5e4b89e827287cd9cd31b6efeff440f980506a8920a5e8" exitCode=0 Feb 20 15:17:03.981608 master-0 kubenswrapper[28120]: I0220 15:17:03.981400 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676f54c559-lmbln" event={"ID":"622b7edf-75d9-42d6-bd8e-39e63d6294b1","Type":"ContainerDied","Data":"2f941d0520d38e5d4f5e4b89e827287cd9cd31b6efeff440f980506a8920a5e8"} Feb 20 15:17:03.981719 master-0 kubenswrapper[28120]: I0220 15:17:03.981697 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676f54c559-lmbln" event={"ID":"622b7edf-75d9-42d6-bd8e-39e63d6294b1","Type":"ContainerStarted","Data":"c2c8d0b84eaf9b3ecb39042f3e4ab66358c16e2bec1e19484f69201eb346ee05"} Feb 20 15:17:04.999022 master-0 kubenswrapper[28120]: I0220 15:17:04.998953 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676f54c559-lmbln" event={"ID":"622b7edf-75d9-42d6-bd8e-39e63d6294b1","Type":"ContainerStarted","Data":"0e37b4c038a08fa35701a86510cb767f90ef4485e0edde6431b29703c69d9a35"} Feb 20 15:17:04.999664 master-0 kubenswrapper[28120]: I0220 15:17:04.999182 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:05.032751 master-0 kubenswrapper[28120]: I0220 15:17:05.032650 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-676f54c559-lmbln" podStartSLOduration=3.032629496 podStartE2EDuration="3.032629496s" podCreationTimestamp="2026-02-20 15:17:02 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:05.027416296 +0000 UTC m=+963.288209869" watchObservedRunningTime="2026-02-20 15:17:05.032629496 +0000 UTC m=+963.293423059" Feb 20 15:17:06.016748 master-0 kubenswrapper[28120]: I0220 15:17:06.016651 28120 generic.go:334] "Generic (PLEG): container finished" podID="eab76df2-0be6-47a5-9cec-1283cadcae8f" containerID="49037054d973db3bf51b26a1890fc69911328f3dccdd5954a90d78fa70fbb612" exitCode=0 Feb 20 15:17:06.017654 master-0 kubenswrapper[28120]: I0220 15:17:06.016813 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nmklt" event={"ID":"eab76df2-0be6-47a5-9cec-1283cadcae8f","Type":"ContainerDied","Data":"49037054d973db3bf51b26a1890fc69911328f3dccdd5954a90d78fa70fbb612"} Feb 20 15:17:07.467057 master-0 kubenswrapper[28120]: I0220 15:17:07.467007 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nmklt" Feb 20 15:17:07.554781 master-0 kubenswrapper[28120]: I0220 15:17:07.554704 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2b6m\" (UniqueName: \"kubernetes.io/projected/eab76df2-0be6-47a5-9cec-1283cadcae8f-kube-api-access-r2b6m\") pod \"eab76df2-0be6-47a5-9cec-1283cadcae8f\" (UID: \"eab76df2-0be6-47a5-9cec-1283cadcae8f\") " Feb 20 15:17:07.555051 master-0 kubenswrapper[28120]: I0220 15:17:07.554988 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eab76df2-0be6-47a5-9cec-1283cadcae8f-config-data\") pod \"eab76df2-0be6-47a5-9cec-1283cadcae8f\" (UID: \"eab76df2-0be6-47a5-9cec-1283cadcae8f\") " Feb 20 15:17:07.555172 master-0 kubenswrapper[28120]: I0220 15:17:07.555135 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab76df2-0be6-47a5-9cec-1283cadcae8f-combined-ca-bundle\") pod \"eab76df2-0be6-47a5-9cec-1283cadcae8f\" (UID: \"eab76df2-0be6-47a5-9cec-1283cadcae8f\") " Feb 20 15:17:07.558272 master-0 kubenswrapper[28120]: I0220 15:17:07.558116 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eab76df2-0be6-47a5-9cec-1283cadcae8f-kube-api-access-r2b6m" (OuterVolumeSpecName: "kube-api-access-r2b6m") pod "eab76df2-0be6-47a5-9cec-1283cadcae8f" (UID: "eab76df2-0be6-47a5-9cec-1283cadcae8f"). InnerVolumeSpecName "kube-api-access-r2b6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:07.579339 master-0 kubenswrapper[28120]: I0220 15:17:07.579251 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab76df2-0be6-47a5-9cec-1283cadcae8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eab76df2-0be6-47a5-9cec-1283cadcae8f" (UID: "eab76df2-0be6-47a5-9cec-1283cadcae8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:07.616374 master-0 kubenswrapper[28120]: I0220 15:17:07.616177 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab76df2-0be6-47a5-9cec-1283cadcae8f-config-data" (OuterVolumeSpecName: "config-data") pod "eab76df2-0be6-47a5-9cec-1283cadcae8f" (UID: "eab76df2-0be6-47a5-9cec-1283cadcae8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:07.657727 master-0 kubenswrapper[28120]: I0220 15:17:07.657644 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eab76df2-0be6-47a5-9cec-1283cadcae8f-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:07.657727 master-0 kubenswrapper[28120]: I0220 15:17:07.657698 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab76df2-0be6-47a5-9cec-1283cadcae8f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:07.657727 master-0 kubenswrapper[28120]: I0220 15:17:07.657709 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2b6m\" (UniqueName: \"kubernetes.io/projected/eab76df2-0be6-47a5-9cec-1283cadcae8f-kube-api-access-r2b6m\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:08.055109 master-0 kubenswrapper[28120]: I0220 15:17:08.055000 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-sync-nmklt" event={"ID":"eab76df2-0be6-47a5-9cec-1283cadcae8f","Type":"ContainerDied","Data":"9dac7e746a263aa5e605c54d1ae999726b3290d438c8fc416e8aabbf53d9e251"} Feb 20 15:17:08.055682 master-0 kubenswrapper[28120]: I0220 15:17:08.055614 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dac7e746a263aa5e605c54d1ae999726b3290d438c8fc416e8aabbf53d9e251" Feb 20 15:17:08.055791 master-0 kubenswrapper[28120]: I0220 15:17:08.055719 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nmklt" Feb 20 15:17:08.398741 master-0 kubenswrapper[28120]: I0220 15:17:08.397794 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vxpzd"] Feb 20 15:17:08.398741 master-0 kubenswrapper[28120]: E0220 15:17:08.398449 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eab76df2-0be6-47a5-9cec-1283cadcae8f" containerName="keystone-db-sync" Feb 20 15:17:08.398741 master-0 kubenswrapper[28120]: I0220 15:17:08.398470 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab76df2-0be6-47a5-9cec-1283cadcae8f" containerName="keystone-db-sync" Feb 20 15:17:08.399103 master-0 kubenswrapper[28120]: I0220 15:17:08.398841 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="eab76df2-0be6-47a5-9cec-1283cadcae8f" containerName="keystone-db-sync" Feb 20 15:17:08.414833 master-0 kubenswrapper[28120]: I0220 15:17:08.401197 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.422871 master-0 kubenswrapper[28120]: I0220 15:17:08.419422 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 20 15:17:08.424189 master-0 kubenswrapper[28120]: I0220 15:17:08.424147 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 20 15:17:08.424505 master-0 kubenswrapper[28120]: I0220 15:17:08.424474 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 20 15:17:08.424682 master-0 kubenswrapper[28120]: I0220 15:17:08.424652 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 20 15:17:08.441754 master-0 kubenswrapper[28120]: I0220 15:17:08.441691 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-676f54c559-lmbln"] Feb 20 15:17:08.441998 master-0 kubenswrapper[28120]: I0220 15:17:08.441942 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-676f54c559-lmbln" podUID="622b7edf-75d9-42d6-bd8e-39e63d6294b1" containerName="dnsmasq-dns" containerID="cri-o://0e37b4c038a08fa35701a86510cb767f90ef4485e0edde6431b29703c69d9a35" gracePeriod=10 Feb 20 15:17:08.476948 master-0 kubenswrapper[28120]: I0220 15:17:08.468042 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vxpzd"] Feb 20 15:17:08.496056 master-0 kubenswrapper[28120]: I0220 15:17:08.494298 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-fernet-keys\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.496056 master-0 kubenswrapper[28120]: I0220 15:17:08.494405 28120 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-scripts\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.496056 master-0 kubenswrapper[28120]: I0220 15:17:08.494438 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-config-data\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.496056 master-0 kubenswrapper[28120]: I0220 15:17:08.494453 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-combined-ca-bundle\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.496056 master-0 kubenswrapper[28120]: I0220 15:17:08.494484 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjf5v\" (UniqueName: \"kubernetes.io/projected/7c236340-192f-4dc7-b77a-798ac1771a5e-kube-api-access-zjf5v\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.496056 master-0 kubenswrapper[28120]: I0220 15:17:08.494569 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-credential-keys\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.502302 master-0 
kubenswrapper[28120]: I0220 15:17:08.502131 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68b4779d45-blxj5"] Feb 20 15:17:08.507844 master-0 kubenswrapper[28120]: I0220 15:17:08.507729 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.513502 master-0 kubenswrapper[28120]: I0220 15:17:08.513463 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68b4779d45-blxj5"] Feb 20 15:17:08.568933 master-0 kubenswrapper[28120]: I0220 15:17:08.568653 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-jw4jx"] Feb 20 15:17:08.569902 master-0 kubenswrapper[28120]: I0220 15:17:08.569880 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-jw4jx" Feb 20 15:17:08.603266 master-0 kubenswrapper[28120]: I0220 15:17:08.603219 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-fernet-keys\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.603649 master-0 kubenswrapper[28120]: I0220 15:17:08.603412 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-ovsdbserver-sb\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.603649 master-0 kubenswrapper[28120]: I0220 15:17:08.603507 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-dns-svc\") pod 
\"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.603649 master-0 kubenswrapper[28120]: I0220 15:17:08.603629 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-scripts\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.603831 master-0 kubenswrapper[28120]: I0220 15:17:08.603730 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-config-data\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.603831 master-0 kubenswrapper[28120]: I0220 15:17:08.603756 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-combined-ca-bundle\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.604035 master-0 kubenswrapper[28120]: I0220 15:17:08.604009 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjf5v\" (UniqueName: \"kubernetes.io/projected/7c236340-192f-4dc7-b77a-798ac1771a5e-kube-api-access-zjf5v\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.604162 master-0 kubenswrapper[28120]: I0220 15:17:08.604125 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-ovsdbserver-nb\") pod 
\"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.604231 master-0 kubenswrapper[28120]: I0220 15:17:08.604189 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-config\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.604293 master-0 kubenswrapper[28120]: I0220 15:17:08.604256 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz7wj\" (UniqueName: \"kubernetes.io/projected/57a90479-4612-4a36-941a-4838fe51c8eb-kube-api-access-lz7wj\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.604380 master-0 kubenswrapper[28120]: I0220 15:17:08.604353 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-credential-keys\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.604449 master-0 kubenswrapper[28120]: I0220 15:17:08.604430 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-dns-swift-storage-0\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.609857 master-0 kubenswrapper[28120]: I0220 15:17:08.609688 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-jw4jx"] Feb 20 15:17:08.618975 master-0 
kubenswrapper[28120]: I0220 15:17:08.617626 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-scripts\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.618975 master-0 kubenswrapper[28120]: I0220 15:17:08.618576 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-config-data\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.630063 master-0 kubenswrapper[28120]: I0220 15:17:08.628438 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-credential-keys\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.630063 master-0 kubenswrapper[28120]: I0220 15:17:08.629215 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-fernet-keys\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.646632 master-0 kubenswrapper[28120]: I0220 15:17:08.646584 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-combined-ca-bundle\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.651819 master-0 kubenswrapper[28120]: I0220 15:17:08.651760 28120 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ironic-280d-account-create-update-6drk5"] Feb 20 15:17:08.653458 master-0 kubenswrapper[28120]: I0220 15:17:08.653426 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-280d-account-create-update-6drk5" Feb 20 15:17:08.658623 master-0 kubenswrapper[28120]: I0220 15:17:08.658507 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Feb 20 15:17:08.675007 master-0 kubenswrapper[28120]: I0220 15:17:08.674951 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-280d-account-create-update-6drk5"] Feb 20 15:17:08.718959 master-0 kubenswrapper[28120]: I0220 15:17:08.709798 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-ovsdbserver-sb\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.718959 master-0 kubenswrapper[28120]: I0220 15:17:08.709868 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-dns-svc\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.718959 master-0 kubenswrapper[28120]: I0220 15:17:08.709976 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9kgp\" (UniqueName: \"kubernetes.io/projected/fb061a46-8122-48b7-a549-c7bd5bf1c0ea-kube-api-access-j9kgp\") pod \"ironic-db-create-jw4jx\" (UID: \"fb061a46-8122-48b7-a549-c7bd5bf1c0ea\") " pod="openstack/ironic-db-create-jw4jx" Feb 20 15:17:08.718959 master-0 kubenswrapper[28120]: I0220 15:17:08.710008 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-ovsdbserver-nb\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.718959 master-0 kubenswrapper[28120]: I0220 15:17:08.710026 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-config\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.718959 master-0 kubenswrapper[28120]: I0220 15:17:08.710051 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz7wj\" (UniqueName: \"kubernetes.io/projected/57a90479-4612-4a36-941a-4838fe51c8eb-kube-api-access-lz7wj\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.718959 master-0 kubenswrapper[28120]: I0220 15:17:08.710084 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a459ddfe-81df-4abd-867d-1ea8ed2f0c95-operator-scripts\") pod \"ironic-280d-account-create-update-6drk5\" (UID: \"a459ddfe-81df-4abd-867d-1ea8ed2f0c95\") " pod="openstack/ironic-280d-account-create-update-6drk5" Feb 20 15:17:08.718959 master-0 kubenswrapper[28120]: I0220 15:17:08.710113 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-dns-swift-storage-0\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.718959 master-0 kubenswrapper[28120]: I0220 15:17:08.710140 28120 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb061a46-8122-48b7-a549-c7bd5bf1c0ea-operator-scripts\") pod \"ironic-db-create-jw4jx\" (UID: \"fb061a46-8122-48b7-a549-c7bd5bf1c0ea\") " pod="openstack/ironic-db-create-jw4jx" Feb 20 15:17:08.718959 master-0 kubenswrapper[28120]: I0220 15:17:08.710164 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwkhw\" (UniqueName: \"kubernetes.io/projected/a459ddfe-81df-4abd-867d-1ea8ed2f0c95-kube-api-access-wwkhw\") pod \"ironic-280d-account-create-update-6drk5\" (UID: \"a459ddfe-81df-4abd-867d-1ea8ed2f0c95\") " pod="openstack/ironic-280d-account-create-update-6drk5" Feb 20 15:17:08.718959 master-0 kubenswrapper[28120]: I0220 15:17:08.711600 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-ovsdbserver-sb\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.719619 master-0 kubenswrapper[28120]: I0220 15:17:08.719512 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-ovsdbserver-nb\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.719671 master-0 kubenswrapper[28120]: I0220 15:17:08.719613 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-dns-svc\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.725488 master-0 
kubenswrapper[28120]: I0220 15:17:08.720334 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-config\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.725488 master-0 kubenswrapper[28120]: I0220 15:17:08.720383 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-dns-swift-storage-0\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.742951 master-0 kubenswrapper[28120]: I0220 15:17:08.730184 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjf5v\" (UniqueName: \"kubernetes.io/projected/7c236340-192f-4dc7-b77a-798ac1771a5e-kube-api-access-zjf5v\") pod \"keystone-bootstrap-vxpzd\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") " pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:08.742951 master-0 kubenswrapper[28120]: I0220 15:17:08.735136 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68b4779d45-blxj5"] Feb 20 15:17:08.774689 master-0 kubenswrapper[28120]: E0220 15:17:08.757665 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-lz7wj], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-68b4779d45-blxj5" podUID="57a90479-4612-4a36-941a-4838fe51c8eb" Feb 20 15:17:08.779018 master-0 kubenswrapper[28120]: I0220 15:17:08.777826 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-rfmtx"] Feb 20 15:17:08.782950 master-0 kubenswrapper[28120]: I0220 15:17:08.779252 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:08.801507 master-0 kubenswrapper[28120]: I0220 15:17:08.801460 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 20 15:17:08.801780 master-0 kubenswrapper[28120]: I0220 15:17:08.801649 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 20 15:17:08.815919 master-0 kubenswrapper[28120]: I0220 15:17:08.811693 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9kgp\" (UniqueName: \"kubernetes.io/projected/fb061a46-8122-48b7-a549-c7bd5bf1c0ea-kube-api-access-j9kgp\") pod \"ironic-db-create-jw4jx\" (UID: \"fb061a46-8122-48b7-a549-c7bd5bf1c0ea\") " pod="openstack/ironic-db-create-jw4jx" Feb 20 15:17:08.815919 master-0 kubenswrapper[28120]: I0220 15:17:08.811800 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a459ddfe-81df-4abd-867d-1ea8ed2f0c95-operator-scripts\") pod \"ironic-280d-account-create-update-6drk5\" (UID: \"a459ddfe-81df-4abd-867d-1ea8ed2f0c95\") " pod="openstack/ironic-280d-account-create-update-6drk5" Feb 20 15:17:08.815919 master-0 kubenswrapper[28120]: I0220 15:17:08.811859 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb061a46-8122-48b7-a549-c7bd5bf1c0ea-operator-scripts\") pod \"ironic-db-create-jw4jx\" (UID: \"fb061a46-8122-48b7-a549-c7bd5bf1c0ea\") " pod="openstack/ironic-db-create-jw4jx" Feb 20 15:17:08.815919 master-0 kubenswrapper[28120]: I0220 15:17:08.811886 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwkhw\" (UniqueName: \"kubernetes.io/projected/a459ddfe-81df-4abd-867d-1ea8ed2f0c95-kube-api-access-wwkhw\") pod \"ironic-280d-account-create-update-6drk5\" (UID: 
\"a459ddfe-81df-4abd-867d-1ea8ed2f0c95\") " pod="openstack/ironic-280d-account-create-update-6drk5" Feb 20 15:17:08.820050 master-0 kubenswrapper[28120]: I0220 15:17:08.819144 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz7wj\" (UniqueName: \"kubernetes.io/projected/57a90479-4612-4a36-941a-4838fe51c8eb-kube-api-access-lz7wj\") pod \"dnsmasq-dns-68b4779d45-blxj5\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:08.820050 master-0 kubenswrapper[28120]: I0220 15:17:08.819771 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a459ddfe-81df-4abd-867d-1ea8ed2f0c95-operator-scripts\") pod \"ironic-280d-account-create-update-6drk5\" (UID: \"a459ddfe-81df-4abd-867d-1ea8ed2f0c95\") " pod="openstack/ironic-280d-account-create-update-6drk5" Feb 20 15:17:08.824945 master-0 kubenswrapper[28120]: I0220 15:17:08.821267 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb061a46-8122-48b7-a549-c7bd5bf1c0ea-operator-scripts\") pod \"ironic-db-create-jw4jx\" (UID: \"fb061a46-8122-48b7-a549-c7bd5bf1c0ea\") " pod="openstack/ironic-db-create-jw4jx" Feb 20 15:17:08.824945 master-0 kubenswrapper[28120]: I0220 15:17:08.821417 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eea69-db-sync-qnpmn"] Feb 20 15:17:08.824945 master-0 kubenswrapper[28120]: I0220 15:17:08.822889 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:08.851769 master-0 kubenswrapper[28120]: I0220 15:17:08.849937 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-eea69-scripts" Feb 20 15:17:08.851769 master-0 kubenswrapper[28120]: I0220 15:17:08.850171 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-eea69-config-data" Feb 20 15:17:08.853404 master-0 kubenswrapper[28120]: I0220 15:17:08.853369 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwkhw\" (UniqueName: \"kubernetes.io/projected/a459ddfe-81df-4abd-867d-1ea8ed2f0c95-kube-api-access-wwkhw\") pod \"ironic-280d-account-create-update-6drk5\" (UID: \"a459ddfe-81df-4abd-867d-1ea8ed2f0c95\") " pod="openstack/ironic-280d-account-create-update-6drk5" Feb 20 15:17:08.854963 master-0 kubenswrapper[28120]: I0220 15:17:08.854590 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rfmtx"] Feb 20 15:17:08.865381 master-0 kubenswrapper[28120]: I0220 15:17:08.865328 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-db-sync-qnpmn"] Feb 20 15:17:08.867050 master-0 kubenswrapper[28120]: I0220 15:17:08.866742 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9kgp\" (UniqueName: \"kubernetes.io/projected/fb061a46-8122-48b7-a549-c7bd5bf1c0ea-kube-api-access-j9kgp\") pod \"ironic-db-create-jw4jx\" (UID: \"fb061a46-8122-48b7-a549-c7bd5bf1c0ea\") " pod="openstack/ironic-db-create-jw4jx" Feb 20 15:17:08.877293 master-0 kubenswrapper[28120]: I0220 15:17:08.877258 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-6pmxn"] Feb 20 15:17:08.879488 master-0 kubenswrapper[28120]: I0220 15:17:08.879296 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:08.881077 master-0 kubenswrapper[28120]: I0220 15:17:08.881042 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 20 15:17:08.881306 master-0 kubenswrapper[28120]: I0220 15:17:08.881236 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 20 15:17:08.896402 master-0 kubenswrapper[28120]: I0220 15:17:08.895916 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d687b68b9-d5bw7"] Feb 20 15:17:08.897959 master-0 kubenswrapper[28120]: I0220 15:17:08.897914 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:08.905986 master-0 kubenswrapper[28120]: I0220 15:17:08.904341 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-280d-account-create-update-6drk5" Feb 20 15:17:08.909745 master-0 kubenswrapper[28120]: I0220 15:17:08.909085 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6pmxn"] Feb 20 15:17:08.915427 master-0 kubenswrapper[28120]: I0220 15:17:08.915383 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/556b936a-2d3e-4b78-b7d1-a7020060be7e-config\") pod \"neutron-db-sync-rfmtx\" (UID: \"556b936a-2d3e-4b78-b7d1-a7020060be7e\") " pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:08.915586 master-0 kubenswrapper[28120]: I0220 15:17:08.915471 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/395d8eca-6336-4d08-b510-0d5d5b95e114-etc-machine-id\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 
15:17:08.915586 master-0 kubenswrapper[28120]: I0220 15:17:08.915508 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556b936a-2d3e-4b78-b7d1-a7020060be7e-combined-ca-bundle\") pod \"neutron-db-sync-rfmtx\" (UID: \"556b936a-2d3e-4b78-b7d1-a7020060be7e\") " pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:08.915586 master-0 kubenswrapper[28120]: I0220 15:17:08.915542 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4lb9\" (UniqueName: \"kubernetes.io/projected/395d8eca-6336-4d08-b510-0d5d5b95e114-kube-api-access-h4lb9\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:08.915586 master-0 kubenswrapper[28120]: I0220 15:17:08.915567 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9qkz\" (UniqueName: \"kubernetes.io/projected/556b936a-2d3e-4b78-b7d1-a7020060be7e-kube-api-access-v9qkz\") pod \"neutron-db-sync-rfmtx\" (UID: \"556b936a-2d3e-4b78-b7d1-a7020060be7e\") " pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:08.915714 master-0 kubenswrapper[28120]: I0220 15:17:08.915587 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-combined-ca-bundle\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:08.915714 master-0 kubenswrapper[28120]: I0220 15:17:08.915607 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-db-sync-config-data\") pod 
\"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:08.915714 master-0 kubenswrapper[28120]: I0220 15:17:08.915683 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-config-data\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:08.915714 master-0 kubenswrapper[28120]: I0220 15:17:08.915704 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-scripts\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:08.927397 master-0 kubenswrapper[28120]: I0220 15:17:08.927365 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d687b68b9-d5bw7"] Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.018991 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4lb9\" (UniqueName: \"kubernetes.io/projected/395d8eca-6336-4d08-b510-0d5d5b95e114-kube-api-access-h4lb9\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019070 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9qkz\" (UniqueName: \"kubernetes.io/projected/556b936a-2d3e-4b78-b7d1-a7020060be7e-kube-api-access-v9qkz\") pod \"neutron-db-sync-rfmtx\" (UID: \"556b936a-2d3e-4b78-b7d1-a7020060be7e\") " pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 
15:17:09.019094 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-combined-ca-bundle\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019118 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-db-sync-config-data\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019146 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-scripts\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019164 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-config-data\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019246 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-config-data\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 
15:17:09.019273 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-scripts\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019306 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-config\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019323 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpmww\" (UniqueName: \"kubernetes.io/projected/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-kube-api-access-vpmww\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019364 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-ovsdbserver-sb\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019418 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/556b936a-2d3e-4b78-b7d1-a7020060be7e-config\") pod \"neutron-db-sync-rfmtx\" (UID: \"556b936a-2d3e-4b78-b7d1-a7020060be7e\") " pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: 
I0220 15:17:09.019441 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-ovsdbserver-nb\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019468 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-dns-swift-storage-0\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019499 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-logs\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019533 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/395d8eca-6336-4d08-b510-0d5d5b95e114-etc-machine-id\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019563 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rptkl\" (UniqueName: \"kubernetes.io/projected/788fe9e6-7d26-4739-855e-0a1f758e0376-kube-api-access-rptkl\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " 
pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019595 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-combined-ca-bundle\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019625 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556b936a-2d3e-4b78-b7d1-a7020060be7e-combined-ca-bundle\") pod \"neutron-db-sync-rfmtx\" (UID: \"556b936a-2d3e-4b78-b7d1-a7020060be7e\") " pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:09.035381 master-0 kubenswrapper[28120]: I0220 15:17:09.019650 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-dns-svc\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.036072 master-0 kubenswrapper[28120]: I0220 15:17:09.035674 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vxpzd" Feb 20 15:17:09.041278 master-0 kubenswrapper[28120]: I0220 15:17:09.040901 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/395d8eca-6336-4d08-b510-0d5d5b95e114-etc-machine-id\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:09.046782 master-0 kubenswrapper[28120]: I0220 15:17:09.045771 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-scripts\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:09.047140 master-0 kubenswrapper[28120]: I0220 15:17:09.047069 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556b936a-2d3e-4b78-b7d1-a7020060be7e-combined-ca-bundle\") pod \"neutron-db-sync-rfmtx\" (UID: \"556b936a-2d3e-4b78-b7d1-a7020060be7e\") " pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:09.049823 master-0 kubenswrapper[28120]: I0220 15:17:09.049440 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-db-sync-config-data\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:09.052183 master-0 kubenswrapper[28120]: I0220 15:17:09.051892 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-config-data\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" 
Feb 20 15:17:09.057487 master-0 kubenswrapper[28120]: I0220 15:17:09.057402 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4lb9\" (UniqueName: \"kubernetes.io/projected/395d8eca-6336-4d08-b510-0d5d5b95e114-kube-api-access-h4lb9\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:09.064864 master-0 kubenswrapper[28120]: I0220 15:17:09.064683 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/556b936a-2d3e-4b78-b7d1-a7020060be7e-config\") pod \"neutron-db-sync-rfmtx\" (UID: \"556b936a-2d3e-4b78-b7d1-a7020060be7e\") " pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:09.066210 master-0 kubenswrapper[28120]: I0220 15:17:09.066143 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-combined-ca-bundle\") pod \"cinder-eea69-db-sync-qnpmn\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") " pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:09.066461 master-0 kubenswrapper[28120]: I0220 15:17:09.066419 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9qkz\" (UniqueName: \"kubernetes.io/projected/556b936a-2d3e-4b78-b7d1-a7020060be7e-kube-api-access-v9qkz\") pod \"neutron-db-sync-rfmtx\" (UID: \"556b936a-2d3e-4b78-b7d1-a7020060be7e\") " pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:09.074031 master-0 kubenswrapper[28120]: I0220 15:17:09.073981 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-jw4jx" Feb 20 15:17:09.084069 master-0 kubenswrapper[28120]: I0220 15:17:09.084025 28120 generic.go:334] "Generic (PLEG): container finished" podID="622b7edf-75d9-42d6-bd8e-39e63d6294b1" containerID="0e37b4c038a08fa35701a86510cb767f90ef4485e0edde6431b29703c69d9a35" exitCode=0 Feb 20 15:17:09.084177 master-0 kubenswrapper[28120]: I0220 15:17:09.084136 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:09.084863 master-0 kubenswrapper[28120]: I0220 15:17:09.084832 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676f54c559-lmbln" event={"ID":"622b7edf-75d9-42d6-bd8e-39e63d6294b1","Type":"ContainerDied","Data":"0e37b4c038a08fa35701a86510cb767f90ef4485e0edde6431b29703c69d9a35"} Feb 20 15:17:09.108565 master-0 kubenswrapper[28120]: I0220 15:17:09.108422 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:09.121093 master-0 kubenswrapper[28120]: I0220 15:17:09.120998 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-ovsdbserver-nb\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.121093 master-0 kubenswrapper[28120]: I0220 15:17:09.121071 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-dns-swift-storage-0\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.121906 master-0 kubenswrapper[28120]: I0220 15:17:09.121108 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-logs\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.121906 master-0 kubenswrapper[28120]: I0220 15:17:09.121325 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rptkl\" (UniqueName: \"kubernetes.io/projected/788fe9e6-7d26-4739-855e-0a1f758e0376-kube-api-access-rptkl\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.121906 master-0 kubenswrapper[28120]: I0220 15:17:09.121351 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-combined-ca-bundle\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.121906 master-0 kubenswrapper[28120]: I0220 15:17:09.121383 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-dns-svc\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.121906 master-0 kubenswrapper[28120]: I0220 15:17:09.121446 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-scripts\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.121906 master-0 kubenswrapper[28120]: I0220 15:17:09.121462 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-config-data\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.121906 master-0 kubenswrapper[28120]: I0220 15:17:09.121697 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-config\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.121906 master-0 kubenswrapper[28120]: I0220 15:17:09.121738 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpmww\" (UniqueName: \"kubernetes.io/projected/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-kube-api-access-vpmww\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.121906 master-0 kubenswrapper[28120]: I0220 15:17:09.121808 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-ovsdbserver-sb\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.122331 master-0 kubenswrapper[28120]: I0220 15:17:09.122003 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-logs\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.122331 master-0 kubenswrapper[28120]: I0220 15:17:09.122178 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-ovsdbserver-nb\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.130053 master-0 kubenswrapper[28120]: I0220 15:17:09.123121 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-dns-svc\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.130053 master-0 kubenswrapper[28120]: I0220 15:17:09.123259 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-config\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.130053 master-0 kubenswrapper[28120]: I0220 15:17:09.123271 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-dns-swift-storage-0\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.130053 master-0 kubenswrapper[28120]: I0220 15:17:09.123423 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-ovsdbserver-sb\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.130053 master-0 kubenswrapper[28120]: I0220 15:17:09.126720 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-combined-ca-bundle\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.130053 master-0 kubenswrapper[28120]: I0220 15:17:09.126987 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-config-data\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.136200 master-0 kubenswrapper[28120]: I0220 15:17:09.136169 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-scripts\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.140385 master-0 kubenswrapper[28120]: I0220 15:17:09.140348 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpmww\" (UniqueName: \"kubernetes.io/projected/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-kube-api-access-vpmww\") pod \"placement-db-sync-6pmxn\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.143707 master-0 kubenswrapper[28120]: I0220 15:17:09.143664 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rptkl\" (UniqueName: \"kubernetes.io/projected/788fe9e6-7d26-4739-855e-0a1f758e0376-kube-api-access-rptkl\") pod \"dnsmasq-dns-d687b68b9-d5bw7\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") " pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.223686 master-0 kubenswrapper[28120]: I0220 15:17:09.223642 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-dns-svc\") pod \"57a90479-4612-4a36-941a-4838fe51c8eb\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " Feb 20 15:17:09.223798 master-0 kubenswrapper[28120]: I0220 15:17:09.223711 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-ovsdbserver-sb\") pod \"57a90479-4612-4a36-941a-4838fe51c8eb\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " Feb 20 15:17:09.223856 master-0 kubenswrapper[28120]: I0220 15:17:09.223822 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-config\") pod \"57a90479-4612-4a36-941a-4838fe51c8eb\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " Feb 20 15:17:09.223905 master-0 kubenswrapper[28120]: I0220 15:17:09.223869 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-ovsdbserver-nb\") pod \"57a90479-4612-4a36-941a-4838fe51c8eb\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " Feb 20 15:17:09.223905 master-0 kubenswrapper[28120]: I0220 15:17:09.223887 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-dns-swift-storage-0\") pod \"57a90479-4612-4a36-941a-4838fe51c8eb\" (UID: \"57a90479-4612-4a36-941a-4838fe51c8eb\") " Feb 20 15:17:09.224016 master-0 kubenswrapper[28120]: I0220 15:17:09.223944 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz7wj\" (UniqueName: \"kubernetes.io/projected/57a90479-4612-4a36-941a-4838fe51c8eb-kube-api-access-lz7wj\") pod \"57a90479-4612-4a36-941a-4838fe51c8eb\" (UID: 
\"57a90479-4612-4a36-941a-4838fe51c8eb\") " Feb 20 15:17:09.225388 master-0 kubenswrapper[28120]: I0220 15:17:09.225345 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-config" (OuterVolumeSpecName: "config") pod "57a90479-4612-4a36-941a-4838fe51c8eb" (UID: "57a90479-4612-4a36-941a-4838fe51c8eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:09.225651 master-0 kubenswrapper[28120]: I0220 15:17:09.225620 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "57a90479-4612-4a36-941a-4838fe51c8eb" (UID: "57a90479-4612-4a36-941a-4838fe51c8eb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:09.225723 master-0 kubenswrapper[28120]: I0220 15:17:09.225664 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57a90479-4612-4a36-941a-4838fe51c8eb" (UID: "57a90479-4612-4a36-941a-4838fe51c8eb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:09.225838 master-0 kubenswrapper[28120]: I0220 15:17:09.225815 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:09.225897 master-0 kubenswrapper[28120]: I0220 15:17:09.225833 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "57a90479-4612-4a36-941a-4838fe51c8eb" (UID: "57a90479-4612-4a36-941a-4838fe51c8eb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:09.226019 master-0 kubenswrapper[28120]: I0220 15:17:09.225990 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "57a90479-4612-4a36-941a-4838fe51c8eb" (UID: "57a90479-4612-4a36-941a-4838fe51c8eb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:09.229575 master-0 kubenswrapper[28120]: I0220 15:17:09.229517 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a90479-4612-4a36-941a-4838fe51c8eb-kube-api-access-lz7wj" (OuterVolumeSpecName: "kube-api-access-lz7wj") pod "57a90479-4612-4a36-941a-4838fe51c8eb" (UID: "57a90479-4612-4a36-941a-4838fe51c8eb"). InnerVolumeSpecName "kube-api-access-lz7wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:09.237178 master-0 kubenswrapper[28120]: I0220 15:17:09.237134 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-db-sync-qnpmn" Feb 20 15:17:09.243404 master-0 kubenswrapper[28120]: I0220 15:17:09.242879 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:09.251425 master-0 kubenswrapper[28120]: I0220 15:17:09.251370 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:09.267893 master-0 kubenswrapper[28120]: I0220 15:17:09.267373 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" Feb 20 15:17:09.329356 master-0 kubenswrapper[28120]: I0220 15:17:09.326426 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-nb\") pod \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " Feb 20 15:17:09.329356 master-0 kubenswrapper[28120]: I0220 15:17:09.326562 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-config\") pod \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " Feb 20 15:17:09.329356 master-0 kubenswrapper[28120]: I0220 15:17:09.326722 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6psfr\" (UniqueName: \"kubernetes.io/projected/622b7edf-75d9-42d6-bd8e-39e63d6294b1-kube-api-access-6psfr\") pod \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " Feb 20 15:17:09.329356 master-0 kubenswrapper[28120]: I0220 15:17:09.326775 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-sb\") pod \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " Feb 20 15:17:09.329356 master-0 kubenswrapper[28120]: I0220 15:17:09.326816 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-dns-swift-storage-0\") pod \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " Feb 20 15:17:09.329356 master-0 kubenswrapper[28120]: I0220 15:17:09.326876 28120 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-dns-svc\") pod \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " Feb 20 15:17:09.329356 master-0 kubenswrapper[28120]: I0220 15:17:09.327489 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz7wj\" (UniqueName: \"kubernetes.io/projected/57a90479-4612-4a36-941a-4838fe51c8eb-kube-api-access-lz7wj\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:09.329356 master-0 kubenswrapper[28120]: I0220 15:17:09.327512 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:09.329356 master-0 kubenswrapper[28120]: I0220 15:17:09.327524 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:09.329356 master-0 kubenswrapper[28120]: I0220 15:17:09.327534 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:09.329356 master-0 kubenswrapper[28120]: I0220 15:17:09.327545 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:09.329356 master-0 kubenswrapper[28120]: I0220 15:17:09.327557 28120 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57a90479-4612-4a36-941a-4838fe51c8eb-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 20 
15:17:09.356408 master-0 kubenswrapper[28120]: I0220 15:17:09.356325 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622b7edf-75d9-42d6-bd8e-39e63d6294b1-kube-api-access-6psfr" (OuterVolumeSpecName: "kube-api-access-6psfr") pod "622b7edf-75d9-42d6-bd8e-39e63d6294b1" (UID: "622b7edf-75d9-42d6-bd8e-39e63d6294b1"). InnerVolumeSpecName "kube-api-access-6psfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:09.432026 master-0 kubenswrapper[28120]: I0220 15:17:09.428789 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "622b7edf-75d9-42d6-bd8e-39e63d6294b1" (UID: "622b7edf-75d9-42d6-bd8e-39e63d6294b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:09.432026 master-0 kubenswrapper[28120]: I0220 15:17:09.428890 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-nb\") pod \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\" (UID: \"622b7edf-75d9-42d6-bd8e-39e63d6294b1\") " Feb 20 15:17:09.432026 master-0 kubenswrapper[28120]: I0220 15:17:09.429636 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6psfr\" (UniqueName: \"kubernetes.io/projected/622b7edf-75d9-42d6-bd8e-39e63d6294b1-kube-api-access-6psfr\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:09.432026 master-0 kubenswrapper[28120]: W0220 15:17:09.429735 28120 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/622b7edf-75d9-42d6-bd8e-39e63d6294b1/volumes/kubernetes.io~configmap/ovsdbserver-nb Feb 20 15:17:09.432026 master-0 kubenswrapper[28120]: I0220 15:17:09.429748 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "622b7edf-75d9-42d6-bd8e-39e63d6294b1" (UID: "622b7edf-75d9-42d6-bd8e-39e63d6294b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:09.468308 master-0 kubenswrapper[28120]: I0220 15:17:09.462835 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "622b7edf-75d9-42d6-bd8e-39e63d6294b1" (UID: "622b7edf-75d9-42d6-bd8e-39e63d6294b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:09.468308 master-0 kubenswrapper[28120]: I0220 15:17:09.463658 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-280d-account-create-update-6drk5"] Feb 20 15:17:09.468308 master-0 kubenswrapper[28120]: I0220 15:17:09.467506 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "622b7edf-75d9-42d6-bd8e-39e63d6294b1" (UID: "622b7edf-75d9-42d6-bd8e-39e63d6294b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:09.476015 master-0 kubenswrapper[28120]: I0220 15:17:09.475945 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "622b7edf-75d9-42d6-bd8e-39e63d6294b1" (UID: "622b7edf-75d9-42d6-bd8e-39e63d6294b1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:09.532810 master-0 kubenswrapper[28120]: I0220 15:17:09.532737 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:09.532810 master-0 kubenswrapper[28120]: I0220 15:17:09.532797 28120 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:09.532810 master-0 kubenswrapper[28120]: I0220 15:17:09.532815 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:09.539768 master-0 kubenswrapper[28120]: I0220 15:17:09.532827 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:09.590443 master-0 kubenswrapper[28120]: I0220 15:17:09.590393 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-config" (OuterVolumeSpecName: "config") pod "622b7edf-75d9-42d6-bd8e-39e63d6294b1" (UID: "622b7edf-75d9-42d6-bd8e-39e63d6294b1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:09.639761 master-0 kubenswrapper[28120]: I0220 15:17:09.636226 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622b7edf-75d9-42d6-bd8e-39e63d6294b1-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:10.070370 master-0 kubenswrapper[28120]: W0220 15:17:10.070065 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c236340_192f_4dc7_b77a_798ac1771a5e.slice/crio-b8ab22711babda8569d17306ebac807361b0583e23ba568a9311e85b273c34bc WatchSource:0}: Error finding container b8ab22711babda8569d17306ebac807361b0583e23ba568a9311e85b273c34bc: Status 404 returned error can't find the container with id b8ab22711babda8569d17306ebac807361b0583e23ba568a9311e85b273c34bc Feb 20 15:17:10.128574 master-0 kubenswrapper[28120]: I0220 15:17:10.128518 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-jw4jx"] Feb 20 15:17:10.128790 master-0 kubenswrapper[28120]: I0220 15:17:10.128587 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vxpzd"] Feb 20 15:17:10.128790 master-0 kubenswrapper[28120]: I0220 15:17:10.128604 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6pmxn"] Feb 20 15:17:10.130539 master-0 kubenswrapper[28120]: I0220 15:17:10.129793 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-280d-account-create-update-6drk5" event={"ID":"a459ddfe-81df-4abd-867d-1ea8ed2f0c95","Type":"ContainerStarted","Data":"ba6576013676e6b966c373c174f1b95e55f59f91b3126091070256173121b9c4"} Feb 20 15:17:10.130539 master-0 kubenswrapper[28120]: I0220 15:17:10.129880 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-280d-account-create-update-6drk5" 
event={"ID":"a459ddfe-81df-4abd-867d-1ea8ed2f0c95","Type":"ContainerStarted","Data":"0b7356c6e675d5dd92d1d716ff90acdb2c21b2e21c0b939d93c347e5b28da58c"} Feb 20 15:17:10.135765 master-0 kubenswrapper[28120]: I0220 15:17:10.132082 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vxpzd" event={"ID":"7c236340-192f-4dc7-b77a-798ac1771a5e","Type":"ContainerStarted","Data":"b8ab22711babda8569d17306ebac807361b0583e23ba568a9311e85b273c34bc"} Feb 20 15:17:10.140318 master-0 kubenswrapper[28120]: I0220 15:17:10.138193 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-db-sync-qnpmn"] Feb 20 15:17:10.142380 master-0 kubenswrapper[28120]: W0220 15:17:10.141580 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb061a46_8122_48b7_a549_c7bd5bf1c0ea.slice/crio-9d69089fe904e9199a2b3be2dea98a316432294524d79d72a84fec91a746ad5f WatchSource:0}: Error finding container 9d69089fe904e9199a2b3be2dea98a316432294524d79d72a84fec91a746ad5f: Status 404 returned error can't find the container with id 9d69089fe904e9199a2b3be2dea98a316432294524d79d72a84fec91a746ad5f Feb 20 15:17:10.153412 master-0 kubenswrapper[28120]: I0220 15:17:10.147142 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b4779d45-blxj5" Feb 20 15:17:10.153412 master-0 kubenswrapper[28120]: I0220 15:17:10.147589 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-676f54c559-lmbln" Feb 20 15:17:10.153412 master-0 kubenswrapper[28120]: I0220 15:17:10.148071 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-676f54c559-lmbln" event={"ID":"622b7edf-75d9-42d6-bd8e-39e63d6294b1","Type":"ContainerDied","Data":"c2c8d0b84eaf9b3ecb39042f3e4ab66358c16e2bec1e19484f69201eb346ee05"} Feb 20 15:17:10.153412 master-0 kubenswrapper[28120]: I0220 15:17:10.148162 28120 scope.go:117] "RemoveContainer" containerID="0e37b4c038a08fa35701a86510cb767f90ef4485e0edde6431b29703c69d9a35" Feb 20 15:17:10.153412 master-0 kubenswrapper[28120]: I0220 15:17:10.149377 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d687b68b9-d5bw7"] Feb 20 15:17:10.199886 master-0 kubenswrapper[28120]: I0220 15:17:10.188822 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-280d-account-create-update-6drk5" podStartSLOduration=2.188785486 podStartE2EDuration="2.188785486s" podCreationTimestamp="2026-02-20 15:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:10.179896244 +0000 UTC m=+968.440689807" watchObservedRunningTime="2026-02-20 15:17:10.188785486 +0000 UTC m=+968.449579049" Feb 20 15:17:10.214460 master-0 kubenswrapper[28120]: I0220 15:17:10.212544 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rfmtx"] Feb 20 15:17:10.215444 master-0 kubenswrapper[28120]: W0220 15:17:10.215369 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod556b936a_2d3e_4b78_b7d1_a7020060be7e.slice/crio-c548a9bd3eaf9517340a4ff97d4cabe198c64078d3b5587ee85b16f6c5d5b516 WatchSource:0}: Error finding container c548a9bd3eaf9517340a4ff97d4cabe198c64078d3b5587ee85b16f6c5d5b516: Status 404 returned error can't find 
the container with id c548a9bd3eaf9517340a4ff97d4cabe198c64078d3b5587ee85b16f6c5d5b516 Feb 20 15:17:10.264883 master-0 kubenswrapper[28120]: I0220 15:17:10.261425 28120 scope.go:117] "RemoveContainer" containerID="2f941d0520d38e5d4f5e4b89e827287cd9cd31b6efeff440f980506a8920a5e8" Feb 20 15:17:10.286979 master-0 kubenswrapper[28120]: I0220 15:17:10.283557 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68b4779d45-blxj5"] Feb 20 15:17:10.312891 master-0 kubenswrapper[28120]: I0220 15:17:10.303665 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68b4779d45-blxj5"] Feb 20 15:17:10.391976 master-0 kubenswrapper[28120]: I0220 15:17:10.385668 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-676f54c559-lmbln"] Feb 20 15:17:10.401166 master-0 kubenswrapper[28120]: I0220 15:17:10.400230 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-676f54c559-lmbln"] Feb 20 15:17:10.506949 master-0 kubenswrapper[28120]: I0220 15:17:10.506735 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c0df7-default-external-api-0"] Feb 20 15:17:10.511007 master-0 kubenswrapper[28120]: E0220 15:17:10.507213 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622b7edf-75d9-42d6-bd8e-39e63d6294b1" containerName="init" Feb 20 15:17:10.511007 master-0 kubenswrapper[28120]: I0220 15:17:10.507232 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="622b7edf-75d9-42d6-bd8e-39e63d6294b1" containerName="init" Feb 20 15:17:10.511007 master-0 kubenswrapper[28120]: E0220 15:17:10.507265 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622b7edf-75d9-42d6-bd8e-39e63d6294b1" containerName="dnsmasq-dns" Feb 20 15:17:10.511007 master-0 kubenswrapper[28120]: I0220 15:17:10.507270 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="622b7edf-75d9-42d6-bd8e-39e63d6294b1" containerName="dnsmasq-dns" Feb 
20 15:17:10.511007 master-0 kubenswrapper[28120]: I0220 15:17:10.507465 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="622b7edf-75d9-42d6-bd8e-39e63d6294b1" containerName="dnsmasq-dns" Feb 20 15:17:10.511007 master-0 kubenswrapper[28120]: I0220 15:17:10.508488 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.517366 master-0 kubenswrapper[28120]: I0220 15:17:10.516396 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-c0df7-default-external-config-data" Feb 20 15:17:10.517366 master-0 kubenswrapper[28120]: I0220 15:17:10.516660 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 20 15:17:10.517366 master-0 kubenswrapper[28120]: I0220 15:17:10.516785 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 20 15:17:10.558133 master-0 kubenswrapper[28120]: I0220 15:17:10.558046 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c0df7-default-external-api-0"] Feb 20 15:17:10.576526 master-0 kubenswrapper[28120]: I0220 15:17:10.576441 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-public-tls-certs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.576641 master-0 kubenswrapper[28120]: I0220 15:17:10.576526 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-combined-ca-bundle\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " 
pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.576641 master-0 kubenswrapper[28120]: I0220 15:17:10.576564 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-scripts\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.576641 master-0 kubenswrapper[28120]: I0220 15:17:10.576620 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91f3d067-71a9-421e-8b60-fbf7609acd6e-httpd-run\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.576744 master-0 kubenswrapper[28120]: I0220 15:17:10.576655 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-config-data\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.576744 master-0 kubenswrapper[28120]: I0220 15:17:10.576700 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.576804 master-0 kubenswrapper[28120]: I0220 15:17:10.576773 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wskns\" (UniqueName: 
\"kubernetes.io/projected/91f3d067-71a9-421e-8b60-fbf7609acd6e-kube-api-access-wskns\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.576870 master-0 kubenswrapper[28120]: I0220 15:17:10.576839 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91f3d067-71a9-421e-8b60-fbf7609acd6e-logs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.678192 master-0 kubenswrapper[28120]: I0220 15:17:10.677873 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-combined-ca-bundle\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.678192 master-0 kubenswrapper[28120]: I0220 15:17:10.677951 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-scripts\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.678192 master-0 kubenswrapper[28120]: I0220 15:17:10.677987 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91f3d067-71a9-421e-8b60-fbf7609acd6e-httpd-run\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.678192 master-0 kubenswrapper[28120]: I0220 15:17:10.678014 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-config-data\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.678192 master-0 kubenswrapper[28120]: I0220 15:17:10.678044 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.678192 master-0 kubenswrapper[28120]: I0220 15:17:10.678095 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wskns\" (UniqueName: \"kubernetes.io/projected/91f3d067-71a9-421e-8b60-fbf7609acd6e-kube-api-access-wskns\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.678192 master-0 kubenswrapper[28120]: I0220 15:17:10.678135 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91f3d067-71a9-421e-8b60-fbf7609acd6e-logs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.678192 master-0 kubenswrapper[28120]: I0220 15:17:10.678188 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-public-tls-certs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 
15:17:10.686380 master-0 kubenswrapper[28120]: I0220 15:17:10.684293 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91f3d067-71a9-421e-8b60-fbf7609acd6e-logs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.686380 master-0 kubenswrapper[28120]: I0220 15:17:10.685109 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-public-tls-certs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.686380 master-0 kubenswrapper[28120]: I0220 15:17:10.685346 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91f3d067-71a9-421e-8b60-fbf7609acd6e-httpd-run\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.686770 master-0 kubenswrapper[28120]: I0220 15:17:10.686726 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 20 15:17:10.686841 master-0 kubenswrapper[28120]: I0220 15:17:10.686794 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/975746d433a1b57bdfbaee59d00ca3c763a79be4597c58f513ba3507287deebe/globalmount\"" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.697755 master-0 kubenswrapper[28120]: I0220 15:17:10.695876 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-config-data\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.697755 master-0 kubenswrapper[28120]: I0220 15:17:10.696727 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-scripts\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.702911 master-0 kubenswrapper[28120]: I0220 15:17:10.702861 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-combined-ca-bundle\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:10.711706 master-0 kubenswrapper[28120]: I0220 15:17:10.711630 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wskns\" (UniqueName: 
\"kubernetes.io/projected/91f3d067-71a9-421e-8b60-fbf7609acd6e-kube-api-access-wskns\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:11.096304 master-0 kubenswrapper[28120]: I0220 15:17:11.096181 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c0df7-default-external-api-0"] Feb 20 15:17:11.102338 master-0 kubenswrapper[28120]: E0220 15:17:11.101426 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-c0df7-default-external-api-0" podUID="91f3d067-71a9-421e-8b60-fbf7609acd6e" Feb 20 15:17:11.181992 master-0 kubenswrapper[28120]: I0220 15:17:11.180343 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rfmtx" event={"ID":"556b936a-2d3e-4b78-b7d1-a7020060be7e","Type":"ContainerStarted","Data":"595479c3e8f2aafb02a35789c9d2e81b12722779ae6b63e5331d9b53df76367f"} Feb 20 15:17:11.181992 master-0 kubenswrapper[28120]: I0220 15:17:11.180400 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rfmtx" event={"ID":"556b936a-2d3e-4b78-b7d1-a7020060be7e","Type":"ContainerStarted","Data":"c548a9bd3eaf9517340a4ff97d4cabe198c64078d3b5587ee85b16f6c5d5b516"} Feb 20 15:17:11.186915 master-0 kubenswrapper[28120]: I0220 15:17:11.183528 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6pmxn" event={"ID":"0efac632-d90c-4e67-b0a7-4dd3df2a08f9","Type":"ContainerStarted","Data":"ec41fd3c9682d28f23e2bb1267b7c34cbbdcb0b4880609c23860dd35a3c88016"} Feb 20 15:17:11.218000 master-0 kubenswrapper[28120]: I0220 15:17:11.206468 28120 generic.go:334] "Generic (PLEG): container finished" podID="a459ddfe-81df-4abd-867d-1ea8ed2f0c95" containerID="ba6576013676e6b966c373c174f1b95e55f59f91b3126091070256173121b9c4" exitCode=0 Feb 20 
15:17:11.218000 master-0 kubenswrapper[28120]: I0220 15:17:11.206568 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-280d-account-create-update-6drk5" event={"ID":"a459ddfe-81df-4abd-867d-1ea8ed2f0c95","Type":"ContainerDied","Data":"ba6576013676e6b966c373c174f1b95e55f59f91b3126091070256173121b9c4"} Feb 20 15:17:11.221286 master-0 kubenswrapper[28120]: I0220 15:17:11.221231 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-db-sync-qnpmn" event={"ID":"395d8eca-6336-4d08-b510-0d5d5b95e114","Type":"ContainerStarted","Data":"59c0bda23f5f33c479ada018592e94d515c85f7785f93b41dcba438fdf92b249"} Feb 20 15:17:11.223183 master-0 kubenswrapper[28120]: I0220 15:17:11.223148 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c0df7-default-internal-api-0"] Feb 20 15:17:11.223864 master-0 kubenswrapper[28120]: I0220 15:17:11.223826 28120 generic.go:334] "Generic (PLEG): container finished" podID="fb061a46-8122-48b7-a549-c7bd5bf1c0ea" containerID="406e5e5746889cd13f6f31eacaff326c90364dd765b468f67dccf72ceb3ad0fc" exitCode=0 Feb 20 15:17:11.226613 master-0 kubenswrapper[28120]: I0220 15:17:11.226573 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-jw4jx" event={"ID":"fb061a46-8122-48b7-a549-c7bd5bf1c0ea","Type":"ContainerDied","Data":"406e5e5746889cd13f6f31eacaff326c90364dd765b468f67dccf72ceb3ad0fc"} Feb 20 15:17:11.226734 master-0 kubenswrapper[28120]: I0220 15:17:11.226617 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-jw4jx" event={"ID":"fb061a46-8122-48b7-a549-c7bd5bf1c0ea","Type":"ContainerStarted","Data":"9d69089fe904e9199a2b3be2dea98a316432294524d79d72a84fec91a746ad5f"} Feb 20 15:17:11.226734 master-0 kubenswrapper[28120]: I0220 15:17:11.226706 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c0df7-default-internal-api-0" Feb 20 15:17:11.233201 master-0 kubenswrapper[28120]: I0220 15:17:11.230898 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 20 15:17:11.233201 master-0 kubenswrapper[28120]: I0220 15:17:11.231102 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-c0df7-default-internal-config-data" Feb 20 15:17:11.239299 master-0 kubenswrapper[28120]: I0220 15:17:11.238002 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c0df7-default-internal-api-0"] Feb 20 15:17:11.241295 master-0 kubenswrapper[28120]: I0220 15:17:11.240381 28120 generic.go:334] "Generic (PLEG): container finished" podID="788fe9e6-7d26-4739-855e-0a1f758e0376" containerID="d253d7a4782181ee715b0a7baa5e77a2f9a2113fd77b7646e8676ddef1abf770" exitCode=0 Feb 20 15:17:11.241295 master-0 kubenswrapper[28120]: I0220 15:17:11.240457 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" event={"ID":"788fe9e6-7d26-4739-855e-0a1f758e0376","Type":"ContainerDied","Data":"d253d7a4782181ee715b0a7baa5e77a2f9a2113fd77b7646e8676ddef1abf770"} Feb 20 15:17:11.241295 master-0 kubenswrapper[28120]: I0220 15:17:11.240481 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" event={"ID":"788fe9e6-7d26-4739-855e-0a1f758e0376","Type":"ContainerStarted","Data":"d522be06ac328822d0118bced3005f41d9fbbb82b454280943ca80d362998c6b"} Feb 20 15:17:11.248508 master-0 kubenswrapper[28120]: I0220 15:17:11.247698 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:11.248698 master-0 kubenswrapper[28120]: I0220 15:17:11.248539 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vxpzd" event={"ID":"7c236340-192f-4dc7-b77a-798ac1771a5e","Type":"ContainerStarted","Data":"8d0d95b16efe3caf22501cb057776fd5f6b09608c7ca41fdec256586d6e4e404"} Feb 20 15:17:11.286961 master-0 kubenswrapper[28120]: I0220 15:17:11.284391 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-rfmtx" podStartSLOduration=3.284368852 podStartE2EDuration="3.284368852s" podCreationTimestamp="2026-02-20 15:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:11.208523141 +0000 UTC m=+969.469316704" watchObservedRunningTime="2026-02-20 15:17:11.284368852 +0000 UTC m=+969.545162415" Feb 20 15:17:11.294241 master-0 kubenswrapper[28120]: I0220 15:17:11.293452 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e80723b0-174d-4db6-ba59-b4ab371ed0cc-httpd-run\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0" Feb 20 15:17:11.294241 master-0 kubenswrapper[28120]: I0220 15:17:11.293498 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-internal-tls-certs\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0" Feb 20 15:17:11.294241 master-0 kubenswrapper[28120]: I0220 15:17:11.293520 28120 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-68c93e35-40fa-4709-92c7-5387ee663a47\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0" Feb 20 15:17:11.294241 master-0 kubenswrapper[28120]: I0220 15:17:11.293567 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-combined-ca-bundle\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0" Feb 20 15:17:11.294241 master-0 kubenswrapper[28120]: I0220 15:17:11.293738 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42fl5\" (UniqueName: \"kubernetes.io/projected/e80723b0-174d-4db6-ba59-b4ab371ed0cc-kube-api-access-42fl5\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0" Feb 20 15:17:11.294241 master-0 kubenswrapper[28120]: I0220 15:17:11.293758 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-config-data\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0" Feb 20 15:17:11.294241 master-0 kubenswrapper[28120]: I0220 15:17:11.293807 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e80723b0-174d-4db6-ba59-b4ab371ed0cc-logs\") pod \"glance-c0df7-default-internal-api-0\" (UID: 
\"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0" Feb 20 15:17:11.294241 master-0 kubenswrapper[28120]: I0220 15:17:11.293850 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-scripts\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0" Feb 20 15:17:11.332967 master-0 kubenswrapper[28120]: I0220 15:17:11.332210 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vxpzd" podStartSLOduration=3.332193334 podStartE2EDuration="3.332193334s" podCreationTimestamp="2026-02-20 15:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:11.311076637 +0000 UTC m=+969.571870200" watchObservedRunningTime="2026-02-20 15:17:11.332193334 +0000 UTC m=+969.592986897" Feb 20 15:17:11.371993 master-0 kubenswrapper[28120]: I0220 15:17:11.371884 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:11.395377 master-0 kubenswrapper[28120]: I0220 15:17:11.395307 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-public-tls-certs\") pod \"91f3d067-71a9-421e-8b60-fbf7609acd6e\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " Feb 20 15:17:11.395548 master-0 kubenswrapper[28120]: I0220 15:17:11.395395 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-scripts\") pod \"91f3d067-71a9-421e-8b60-fbf7609acd6e\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " Feb 20 15:17:11.395548 master-0 kubenswrapper[28120]: I0220 15:17:11.395437 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91f3d067-71a9-421e-8b60-fbf7609acd6e-httpd-run\") pod \"91f3d067-71a9-421e-8b60-fbf7609acd6e\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " Feb 20 15:17:11.395654 master-0 kubenswrapper[28120]: I0220 15:17:11.395634 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91f3d067-71a9-421e-8b60-fbf7609acd6e-logs\") pod \"91f3d067-71a9-421e-8b60-fbf7609acd6e\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " Feb 20 15:17:11.395867 master-0 kubenswrapper[28120]: I0220 15:17:11.395829 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91f3d067-71a9-421e-8b60-fbf7609acd6e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "91f3d067-71a9-421e-8b60-fbf7609acd6e" (UID: "91f3d067-71a9-421e-8b60-fbf7609acd6e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:17:11.396360 master-0 kubenswrapper[28120]: I0220 15:17:11.396316 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91f3d067-71a9-421e-8b60-fbf7609acd6e-logs" (OuterVolumeSpecName: "logs") pod "91f3d067-71a9-421e-8b60-fbf7609acd6e" (UID: "91f3d067-71a9-421e-8b60-fbf7609acd6e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:17:11.396470 master-0 kubenswrapper[28120]: I0220 15:17:11.396447 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-config-data\") pod \"91f3d067-71a9-421e-8b60-fbf7609acd6e\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " Feb 20 15:17:11.396549 master-0 kubenswrapper[28120]: I0220 15:17:11.396526 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-combined-ca-bundle\") pod \"91f3d067-71a9-421e-8b60-fbf7609acd6e\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " Feb 20 15:17:11.396594 master-0 kubenswrapper[28120]: I0220 15:17:11.396556 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wskns\" (UniqueName: \"kubernetes.io/projected/91f3d067-71a9-421e-8b60-fbf7609acd6e-kube-api-access-wskns\") pod \"91f3d067-71a9-421e-8b60-fbf7609acd6e\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " Feb 20 15:17:11.397560 master-0 kubenswrapper[28120]: I0220 15:17:11.397139 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e80723b0-174d-4db6-ba59-b4ab371ed0cc-httpd-run\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " 
pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.397632 master-0 kubenswrapper[28120]: I0220 15:17:11.397617 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-internal-tls-certs\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.398173 master-0 kubenswrapper[28120]: I0220 15:17:11.398125 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e80723b0-174d-4db6-ba59-b4ab371ed0cc-httpd-run\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.398262 master-0 kubenswrapper[28120]: I0220 15:17:11.398203 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "91f3d067-71a9-421e-8b60-fbf7609acd6e" (UID: "91f3d067-71a9-421e-8b60-fbf7609acd6e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:11.398697 master-0 kubenswrapper[28120]: I0220 15:17:11.398598 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-68c93e35-40fa-4709-92c7-5387ee663a47\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.399345 master-0 kubenswrapper[28120]: I0220 15:17:11.399308 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-combined-ca-bundle\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.399415 master-0 kubenswrapper[28120]: I0220 15:17:11.399352 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-scripts" (OuterVolumeSpecName: "scripts") pod "91f3d067-71a9-421e-8b60-fbf7609acd6e" (UID: "91f3d067-71a9-421e-8b60-fbf7609acd6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:11.400585 master-0 kubenswrapper[28120]: I0220 15:17:11.399933 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42fl5\" (UniqueName: \"kubernetes.io/projected/e80723b0-174d-4db6-ba59-b4ab371ed0cc-kube-api-access-42fl5\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.400585 master-0 kubenswrapper[28120]: I0220 15:17:11.399988 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-config-data\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.400585 master-0 kubenswrapper[28120]: I0220 15:17:11.400042 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e80723b0-174d-4db6-ba59-b4ab371ed0cc-logs\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.401010 master-0 kubenswrapper[28120]: I0220 15:17:11.400987 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e80723b0-174d-4db6-ba59-b4ab371ed0cc-logs\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.401066 master-0 kubenswrapper[28120]: I0220 15:17:11.401056 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-scripts\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.401956 master-0 kubenswrapper[28120]: I0220 15:17:11.401570 28120 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:11.402031 master-0 kubenswrapper[28120]: I0220 15:17:11.401977 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:11.402031 master-0 kubenswrapper[28120]: I0220 15:17:11.401989 28120 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91f3d067-71a9-421e-8b60-fbf7609acd6e-httpd-run\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:11.402174 master-0 kubenswrapper[28120]: I0220 15:17:11.401999 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91f3d067-71a9-421e-8b60-fbf7609acd6e-logs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:11.402220 master-0 kubenswrapper[28120]: I0220 15:17:11.402184 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-internal-tls-certs\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.402297 master-0 kubenswrapper[28120]: I0220 15:17:11.402274 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 20 15:17:11.402576 master-0 kubenswrapper[28120]: I0220 15:17:11.402311 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-68c93e35-40fa-4709-92c7-5387ee663a47\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/f9eb06ead75927ffb33ebe5802ebc626d2bdc6570ed175f8a5fed1a927e8b454/globalmount\"" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.403841 master-0 kubenswrapper[28120]: I0220 15:17:11.403806 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-config-data\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.404668 master-0 kubenswrapper[28120]: I0220 15:17:11.404637 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-scripts\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.405049 master-0 kubenswrapper[28120]: I0220 15:17:11.405025 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-combined-ca-bundle\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.405512 master-0 kubenswrapper[28120]: I0220 15:17:11.405486 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-config-data" (OuterVolumeSpecName: "config-data") pod "91f3d067-71a9-421e-8b60-fbf7609acd6e" (UID: "91f3d067-71a9-421e-8b60-fbf7609acd6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:11.411216 master-0 kubenswrapper[28120]: I0220 15:17:11.411146 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f3d067-71a9-421e-8b60-fbf7609acd6e-kube-api-access-wskns" (OuterVolumeSpecName: "kube-api-access-wskns") pod "91f3d067-71a9-421e-8b60-fbf7609acd6e" (UID: "91f3d067-71a9-421e-8b60-fbf7609acd6e"). InnerVolumeSpecName "kube-api-access-wskns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:17:11.414881 master-0 kubenswrapper[28120]: I0220 15:17:11.414831 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91f3d067-71a9-421e-8b60-fbf7609acd6e" (UID: "91f3d067-71a9-421e-8b60-fbf7609acd6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:11.416469 master-0 kubenswrapper[28120]: I0220 15:17:11.416437 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42fl5\" (UniqueName: \"kubernetes.io/projected/e80723b0-174d-4db6-ba59-b4ab371ed0cc-kube-api-access-42fl5\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:11.503853 master-0 kubenswrapper[28120]: I0220 15:17:11.503797 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:11.503853 master-0 kubenswrapper[28120]: I0220 15:17:11.503839 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91f3d067-71a9-421e-8b60-fbf7609acd6e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:11.503853 master-0 kubenswrapper[28120]: I0220 15:17:11.503851 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wskns\" (UniqueName: \"kubernetes.io/projected/91f3d067-71a9-421e-8b60-fbf7609acd6e-kube-api-access-wskns\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:12.076710 master-0 kubenswrapper[28120]: I0220 15:17:12.076638 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a90479-4612-4a36-941a-4838fe51c8eb" path="/var/lib/kubelet/pods/57a90479-4612-4a36-941a-4838fe51c8eb/volumes"
Feb 20 15:17:12.077229 master-0 kubenswrapper[28120]: I0220 15:17:12.077193 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="622b7edf-75d9-42d6-bd8e-39e63d6294b1" path="/var/lib/kubelet/pods/622b7edf-75d9-42d6-bd8e-39e63d6294b1/volumes"
Feb 20 15:17:12.079896 master-0 kubenswrapper[28120]: I0220 15:17:12.078427 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"glance-c0df7-default-external-api-0\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:17:12.130959 master-0 kubenswrapper[28120]: I0220 15:17:12.119098 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"91f3d067-71a9-421e-8b60-fbf7609acd6e\" (UID: \"91f3d067-71a9-421e-8b60-fbf7609acd6e\") "
Feb 20 15:17:12.264948 master-0 kubenswrapper[28120]: I0220 15:17:12.263890 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" event={"ID":"788fe9e6-7d26-4739-855e-0a1f758e0376","Type":"ContainerStarted","Data":"8e083b7dce91c3ee1f1f7d37019ca73b66f7e59d1305c72e283d8238718f7459"}
Feb 20 15:17:12.264948 master-0 kubenswrapper[28120]: I0220 15:17:12.264039 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:17:12.372608 master-0 kubenswrapper[28120]: I0220 15:17:12.361702 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" podStartSLOduration=4.361676061 podStartE2EDuration="4.361676061s" podCreationTimestamp="2026-02-20 15:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:12.34637227 +0000 UTC m=+970.607165843" watchObservedRunningTime="2026-02-20 15:17:12.361676061 +0000 UTC m=+970.622469664"
Feb 20 15:17:14.330017 master-0 kubenswrapper[28120]: I0220 15:17:14.325211 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7"
Feb 20 15:17:14.982397 master-0 kubenswrapper[28120]: I0220 15:17:14.982349 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-68c93e35-40fa-4709-92c7-5387ee663a47\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:15.132164 master-0 kubenswrapper[28120]: I0220 15:17:15.132034 28120 trace.go:236] Trace[1659322084]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (20-Feb-2026 15:17:12.055) (total time: 3076ms):
Feb 20 15:17:15.132164 master-0 kubenswrapper[28120]: Trace[1659322084]: [3.076309642s] [3.076309642s] END
Feb 20 15:17:15.148360 master-0 kubenswrapper[28120]: I0220 15:17:15.148294 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:15.151596 master-0 kubenswrapper[28120]: I0220 15:17:15.151560 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12" (OuterVolumeSpecName: "glance") pod "91f3d067-71a9-421e-8b60-fbf7609acd6e" (UID: "91f3d067-71a9-421e-8b60-fbf7609acd6e"). InnerVolumeSpecName "pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 20 15:17:15.251837 master-0 kubenswrapper[28120]: I0220 15:17:15.251771 28120 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") on node \"master-0\" "
Feb 20 15:17:15.291033 master-0 kubenswrapper[28120]: I0220 15:17:15.290793 28120 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 20 15:17:15.291638 master-0 kubenswrapper[28120]: I0220 15:17:15.291619 28120 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a" (UniqueName: "kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12") on node "master-0" Feb 20 15:17:15.356233 master-0 kubenswrapper[28120]: I0220 15:17:15.356157 28120 reconciler_common.go:293] "Volume detached for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:15.384888 master-0 kubenswrapper[28120]: I0220 15:17:15.361576 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c0df7-default-external-api-0"] Feb 20 15:17:15.388100 master-0 kubenswrapper[28120]: I0220 15:17:15.385135 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c0df7-default-external-api-0"] Feb 20 15:17:15.388100 master-0 kubenswrapper[28120]: I0220 15:17:15.385211 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c0df7-default-external-api-0"] Feb 20 15:17:15.388100 master-0 kubenswrapper[28120]: I0220 15:17:15.387462 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.388554 master-0 kubenswrapper[28120]: I0220 15:17:15.388504 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c0df7-default-external-api-0"] Feb 20 15:17:15.392626 master-0 kubenswrapper[28120]: I0220 15:17:15.392582 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-c0df7-default-external-config-data" Feb 20 15:17:15.394313 master-0 kubenswrapper[28120]: I0220 15:17:15.394268 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 20 15:17:15.564862 master-0 kubenswrapper[28120]: I0220 15:17:15.564299 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-config-data\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.564862 master-0 kubenswrapper[28120]: I0220 15:17:15.564445 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-public-tls-certs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.564862 master-0 kubenswrapper[28120]: I0220 15:17:15.564471 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-httpd-run\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.564862 master-0 kubenswrapper[28120]: 
I0220 15:17:15.564502 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v46dv\" (UniqueName: \"kubernetes.io/projected/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-kube-api-access-v46dv\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.564862 master-0 kubenswrapper[28120]: I0220 15:17:15.564520 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-combined-ca-bundle\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.564862 master-0 kubenswrapper[28120]: I0220 15:17:15.564541 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-logs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.564862 master-0 kubenswrapper[28120]: I0220 15:17:15.564563 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.564862 master-0 kubenswrapper[28120]: I0220 15:17:15.564611 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-scripts\") pod 
\"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.666271 master-0 kubenswrapper[28120]: I0220 15:17:15.666113 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-config-data\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.666485 master-0 kubenswrapper[28120]: I0220 15:17:15.666288 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-public-tls-certs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.666485 master-0 kubenswrapper[28120]: I0220 15:17:15.666325 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-httpd-run\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.666485 master-0 kubenswrapper[28120]: I0220 15:17:15.666370 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v46dv\" (UniqueName: \"kubernetes.io/projected/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-kube-api-access-v46dv\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.666683 master-0 kubenswrapper[28120]: I0220 15:17:15.666541 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-combined-ca-bundle\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.666683 master-0 kubenswrapper[28120]: I0220 15:17:15.666578 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-logs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.666683 master-0 kubenswrapper[28120]: I0220 15:17:15.666612 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.667361 master-0 kubenswrapper[28120]: I0220 15:17:15.667329 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-scripts\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.671027 master-0 kubenswrapper[28120]: I0220 15:17:15.668909 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-httpd-run\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.671027 master-0 kubenswrapper[28120]: I0220 15:17:15.669998 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-logs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.671027 master-0 kubenswrapper[28120]: I0220 15:17:15.670761 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 20 15:17:15.671027 master-0 kubenswrapper[28120]: I0220 15:17:15.670787 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/975746d433a1b57bdfbaee59d00ca3c763a79be4597c58f513ba3507287deebe/globalmount\"" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.671901 master-0 kubenswrapper[28120]: I0220 15:17:15.671832 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-public-tls-certs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.672308 master-0 kubenswrapper[28120]: I0220 15:17:15.672275 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-config-data\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.675043 master-0 kubenswrapper[28120]: I0220 15:17:15.674917 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-scripts\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.675676 master-0 kubenswrapper[28120]: I0220 15:17:15.675605 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-combined-ca-bundle\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:15.692303 master-0 kubenswrapper[28120]: I0220 15:17:15.692002 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v46dv\" (UniqueName: \"kubernetes.io/projected/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-kube-api-access-v46dv\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:16.072858 master-0 kubenswrapper[28120]: I0220 15:17:16.072782 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f3d067-71a9-421e-8b60-fbf7609acd6e" path="/var/lib/kubelet/pods/91f3d067-71a9-421e-8b60-fbf7609acd6e/volumes" Feb 20 15:17:16.084039 master-0 kubenswrapper[28120]: E0220 15:17:16.083992 28120 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c236340_192f_4dc7_b77a_798ac1771a5e.slice/crio-8d0d95b16efe3caf22501cb057776fd5f6b09608c7ca41fdec256586d6e4e404.scope\": RecentStats: unable to find data in memory cache]" Feb 20 15:17:16.209695 master-0 kubenswrapper[28120]: I0220 15:17:16.209196 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-jw4jx" Feb 20 15:17:16.220236 master-0 kubenswrapper[28120]: I0220 15:17:16.220199 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-280d-account-create-update-6drk5" Feb 20 15:17:16.382597 master-0 kubenswrapper[28120]: I0220 15:17:16.382546 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9kgp\" (UniqueName: \"kubernetes.io/projected/fb061a46-8122-48b7-a549-c7bd5bf1c0ea-kube-api-access-j9kgp\") pod \"fb061a46-8122-48b7-a549-c7bd5bf1c0ea\" (UID: \"fb061a46-8122-48b7-a549-c7bd5bf1c0ea\") " Feb 20 15:17:16.383127 master-0 kubenswrapper[28120]: I0220 15:17:16.382685 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwkhw\" (UniqueName: \"kubernetes.io/projected/a459ddfe-81df-4abd-867d-1ea8ed2f0c95-kube-api-access-wwkhw\") pod \"a459ddfe-81df-4abd-867d-1ea8ed2f0c95\" (UID: \"a459ddfe-81df-4abd-867d-1ea8ed2f0c95\") " Feb 20 15:17:16.383127 master-0 kubenswrapper[28120]: I0220 15:17:16.382764 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a459ddfe-81df-4abd-867d-1ea8ed2f0c95-operator-scripts\") pod \"a459ddfe-81df-4abd-867d-1ea8ed2f0c95\" (UID: \"a459ddfe-81df-4abd-867d-1ea8ed2f0c95\") " Feb 20 15:17:16.383127 master-0 kubenswrapper[28120]: I0220 15:17:16.382877 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb061a46-8122-48b7-a549-c7bd5bf1c0ea-operator-scripts\") pod \"fb061a46-8122-48b7-a549-c7bd5bf1c0ea\" (UID: \"fb061a46-8122-48b7-a549-c7bd5bf1c0ea\") " Feb 20 15:17:16.383444 master-0 kubenswrapper[28120]: I0220 15:17:16.383391 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a459ddfe-81df-4abd-867d-1ea8ed2f0c95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a459ddfe-81df-4abd-867d-1ea8ed2f0c95" (UID: "a459ddfe-81df-4abd-867d-1ea8ed2f0c95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:16.383491 master-0 kubenswrapper[28120]: I0220 15:17:16.383451 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb061a46-8122-48b7-a549-c7bd5bf1c0ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb061a46-8122-48b7-a549-c7bd5bf1c0ea" (UID: "fb061a46-8122-48b7-a549-c7bd5bf1c0ea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:16.387189 master-0 kubenswrapper[28120]: I0220 15:17:16.387125 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb061a46-8122-48b7-a549-c7bd5bf1c0ea-kube-api-access-j9kgp" (OuterVolumeSpecName: "kube-api-access-j9kgp") pod "fb061a46-8122-48b7-a549-c7bd5bf1c0ea" (UID: "fb061a46-8122-48b7-a549-c7bd5bf1c0ea"). InnerVolumeSpecName "kube-api-access-j9kgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:16.401506 master-0 kubenswrapper[28120]: I0220 15:17:16.401453 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a459ddfe-81df-4abd-867d-1ea8ed2f0c95-kube-api-access-wwkhw" (OuterVolumeSpecName: "kube-api-access-wwkhw") pod "a459ddfe-81df-4abd-867d-1ea8ed2f0c95" (UID: "a459ddfe-81df-4abd-867d-1ea8ed2f0c95"). InnerVolumeSpecName "kube-api-access-wwkhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:16.472944 master-0 kubenswrapper[28120]: I0220 15:17:16.472878 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-jw4jx" event={"ID":"fb061a46-8122-48b7-a549-c7bd5bf1c0ea","Type":"ContainerDied","Data":"9d69089fe904e9199a2b3be2dea98a316432294524d79d72a84fec91a746ad5f"} Feb 20 15:17:16.472944 master-0 kubenswrapper[28120]: I0220 15:17:16.472948 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d69089fe904e9199a2b3be2dea98a316432294524d79d72a84fec91a746ad5f" Feb 20 15:17:16.473496 master-0 kubenswrapper[28120]: I0220 15:17:16.473437 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-jw4jx" Feb 20 15:17:16.477538 master-0 kubenswrapper[28120]: I0220 15:17:16.477492 28120 generic.go:334] "Generic (PLEG): container finished" podID="7c236340-192f-4dc7-b77a-798ac1771a5e" containerID="8d0d95b16efe3caf22501cb057776fd5f6b09608c7ca41fdec256586d6e4e404" exitCode=0 Feb 20 15:17:16.477599 master-0 kubenswrapper[28120]: I0220 15:17:16.477523 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vxpzd" event={"ID":"7c236340-192f-4dc7-b77a-798ac1771a5e","Type":"ContainerDied","Data":"8d0d95b16efe3caf22501cb057776fd5f6b09608c7ca41fdec256586d6e4e404"} Feb 20 15:17:16.479235 master-0 kubenswrapper[28120]: I0220 15:17:16.479201 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6pmxn" event={"ID":"0efac632-d90c-4e67-b0a7-4dd3df2a08f9","Type":"ContainerStarted","Data":"d0614b81c2688cf2f44582c32b3564f9605238cd8e0ea574e48b88446f7d02f6"} Feb 20 15:17:16.481401 master-0 kubenswrapper[28120]: I0220 15:17:16.481369 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-280d-account-create-update-6drk5" 
event={"ID":"a459ddfe-81df-4abd-867d-1ea8ed2f0c95","Type":"ContainerDied","Data":"0b7356c6e675d5dd92d1d716ff90acdb2c21b2e21c0b939d93c347e5b28da58c"} Feb 20 15:17:16.481468 master-0 kubenswrapper[28120]: I0220 15:17:16.481407 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b7356c6e675d5dd92d1d716ff90acdb2c21b2e21c0b939d93c347e5b28da58c" Feb 20 15:17:16.481554 master-0 kubenswrapper[28120]: I0220 15:17:16.481389 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-280d-account-create-update-6drk5" Feb 20 15:17:16.485384 master-0 kubenswrapper[28120]: I0220 15:17:16.485336 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9kgp\" (UniqueName: \"kubernetes.io/projected/fb061a46-8122-48b7-a549-c7bd5bf1c0ea-kube-api-access-j9kgp\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:16.485384 master-0 kubenswrapper[28120]: I0220 15:17:16.485385 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwkhw\" (UniqueName: \"kubernetes.io/projected/a459ddfe-81df-4abd-867d-1ea8ed2f0c95-kube-api-access-wwkhw\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:16.485499 master-0 kubenswrapper[28120]: I0220 15:17:16.485396 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a459ddfe-81df-4abd-867d-1ea8ed2f0c95-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:16.485499 master-0 kubenswrapper[28120]: I0220 15:17:16.485406 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb061a46-8122-48b7-a549-c7bd5bf1c0ea-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:16.539114 master-0 kubenswrapper[28120]: I0220 15:17:16.539033 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-6pmxn" 
podStartSLOduration=2.6850915950000003 podStartE2EDuration="8.539014515s" podCreationTimestamp="2026-02-20 15:17:08 +0000 UTC" firstStartedPulling="2026-02-20 15:17:10.150413439 +0000 UTC m=+968.411207002" lastFinishedPulling="2026-02-20 15:17:16.004336349 +0000 UTC m=+974.265129922" observedRunningTime="2026-02-20 15:17:16.530880462 +0000 UTC m=+974.791674025" watchObservedRunningTime="2026-02-20 15:17:16.539014515 +0000 UTC m=+974.799808078"
Feb 20 15:17:16.646049 master-0 kubenswrapper[28120]: I0220 15:17:16.645801 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c0df7-default-internal-api-0"]
Feb 20 15:17:16.646457 master-0 kubenswrapper[28120]: W0220 15:17:16.646427 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode80723b0_174d_4db6_ba59_b4ab371ed0cc.slice/crio-73fa186047e93843df4493be40e57db3bacd3491b31e8a70a5c874dcffaf1cbd WatchSource:0}: Error finding container 73fa186047e93843df4493be40e57db3bacd3491b31e8a70a5c874dcffaf1cbd: Status 404 returned error can't find the container with id 73fa186047e93843df4493be40e57db3bacd3491b31e8a70a5c874dcffaf1cbd
Feb 20 15:17:17.073741 master-0 kubenswrapper[28120]: I0220 15:17:17.073681 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"glance-c0df7-default-external-api-0\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:17:17.228992 master-0 kubenswrapper[28120]: I0220 15:17:17.221116 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:17:17.495126 master-0 kubenswrapper[28120]: I0220 15:17:17.495052 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-internal-api-0" event={"ID":"e80723b0-174d-4db6-ba59-b4ab371ed0cc","Type":"ContainerStarted","Data":"2e7744b8fa231391eb964c207b409e3353315d2aaee5b3c27e0b40008962380c"}
Feb 20 15:17:17.495126 master-0 kubenswrapper[28120]: I0220 15:17:17.495125 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-internal-api-0" event={"ID":"e80723b0-174d-4db6-ba59-b4ab371ed0cc","Type":"ContainerStarted","Data":"73fa186047e93843df4493be40e57db3bacd3491b31e8a70a5c874dcffaf1cbd"}
Feb 20 15:17:17.895041 master-0 kubenswrapper[28120]: I0220 15:17:17.893118 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vxpzd"
Feb 20 15:17:18.021450 master-0 kubenswrapper[28120]: I0220 15:17:18.021389 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-scripts\") pod \"7c236340-192f-4dc7-b77a-798ac1771a5e\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") "
Feb 20 15:17:18.021700 master-0 kubenswrapper[28120]: I0220 15:17:18.021500 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-combined-ca-bundle\") pod \"7c236340-192f-4dc7-b77a-798ac1771a5e\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") "
Feb 20 15:17:18.021700 master-0 kubenswrapper[28120]: I0220 15:17:18.021541 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-fernet-keys\") pod \"7c236340-192f-4dc7-b77a-798ac1771a5e\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") "
Feb 20 15:17:18.021700 master-0 kubenswrapper[28120]: I0220 15:17:18.021608 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-credential-keys\") pod \"7c236340-192f-4dc7-b77a-798ac1771a5e\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") "
Feb 20 15:17:18.021844 master-0 kubenswrapper[28120]: I0220 15:17:18.021733 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-config-data\") pod \"7c236340-192f-4dc7-b77a-798ac1771a5e\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") "
Feb 20 15:17:18.021844 master-0 kubenswrapper[28120]: I0220 15:17:18.021753 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjf5v\" (UniqueName: \"kubernetes.io/projected/7c236340-192f-4dc7-b77a-798ac1771a5e-kube-api-access-zjf5v\") pod \"7c236340-192f-4dc7-b77a-798ac1771a5e\" (UID: \"7c236340-192f-4dc7-b77a-798ac1771a5e\") "
Feb 20 15:17:18.025852 master-0 kubenswrapper[28120]: I0220 15:17:18.025800 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-scripts" (OuterVolumeSpecName: "scripts") pod "7c236340-192f-4dc7-b77a-798ac1771a5e" (UID: "7c236340-192f-4dc7-b77a-798ac1771a5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:18.026572 master-0 kubenswrapper[28120]: I0220 15:17:18.026515 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c236340-192f-4dc7-b77a-798ac1771a5e-kube-api-access-zjf5v" (OuterVolumeSpecName: "kube-api-access-zjf5v") pod "7c236340-192f-4dc7-b77a-798ac1771a5e" (UID: "7c236340-192f-4dc7-b77a-798ac1771a5e").
InnerVolumeSpecName "kube-api-access-zjf5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:17:18.027940 master-0 kubenswrapper[28120]: I0220 15:17:18.027859 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7c236340-192f-4dc7-b77a-798ac1771a5e" (UID: "7c236340-192f-4dc7-b77a-798ac1771a5e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:18.030407 master-0 kubenswrapper[28120]: I0220 15:17:18.030356 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7c236340-192f-4dc7-b77a-798ac1771a5e" (UID: "7c236340-192f-4dc7-b77a-798ac1771a5e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:18.055965 master-0 kubenswrapper[28120]: I0220 15:17:18.055885 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c236340-192f-4dc7-b77a-798ac1771a5e" (UID: "7c236340-192f-4dc7-b77a-798ac1771a5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:18.060061 master-0 kubenswrapper[28120]: I0220 15:17:18.059929 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-config-data" (OuterVolumeSpecName: "config-data") pod "7c236340-192f-4dc7-b77a-798ac1771a5e" (UID: "7c236340-192f-4dc7-b77a-798ac1771a5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:18.127434 master-0 kubenswrapper[28120]: I0220 15:17:18.127371 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:18.127434 master-0 kubenswrapper[28120]: I0220 15:17:18.127420 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjf5v\" (UniqueName: \"kubernetes.io/projected/7c236340-192f-4dc7-b77a-798ac1771a5e-kube-api-access-zjf5v\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:18.127434 master-0 kubenswrapper[28120]: I0220 15:17:18.127431 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:18.127434 master-0 kubenswrapper[28120]: I0220 15:17:18.127441 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:18.127434 master-0 kubenswrapper[28120]: I0220 15:17:18.127449 28120 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-fernet-keys\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:18.127788 master-0 kubenswrapper[28120]: I0220 15:17:18.127457 28120 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c236340-192f-4dc7-b77a-798ac1771a5e-credential-keys\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:18.539160 master-0 kubenswrapper[28120]: I0220 15:17:18.535371 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-internal-api-0" event={"ID":"e80723b0-174d-4db6-ba59-b4ab371ed0cc","Type":"ContainerStarted","Data":"e7857b1b473b1248555a33b62f06ca613bf23e6616c09cfeca46f2b7a2aff173"}
Feb 20 15:17:18.539160 master-0 kubenswrapper[28120]: I0220 15:17:18.538982 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vxpzd" event={"ID":"7c236340-192f-4dc7-b77a-798ac1771a5e","Type":"ContainerDied","Data":"b8ab22711babda8569d17306ebac807361b0583e23ba568a9311e85b273c34bc"}
Feb 20 15:17:18.539160 master-0 kubenswrapper[28120]: I0220 15:17:18.539044 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8ab22711babda8569d17306ebac807361b0583e23ba568a9311e85b273c34bc"
Feb 20 15:17:18.539160 master-0 kubenswrapper[28120]: I0220 15:17:18.539124 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vxpzd"
Feb 20 15:17:18.553162 master-0 kubenswrapper[28120]: I0220 15:17:18.553076 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c0df7-default-external-api-0"]
Feb 20 15:17:18.598324 master-0 kubenswrapper[28120]: I0220 15:17:18.597806 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-c0df7-default-internal-api-0" podStartSLOduration=7.597786437 podStartE2EDuration="7.597786437s" podCreationTimestamp="2026-02-20 15:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:18.581290336 +0000 UTC m=+976.842083909" watchObservedRunningTime="2026-02-20 15:17:18.597786437 +0000 UTC m=+976.858580000"
Feb 20 15:17:18.625817 master-0 kubenswrapper[28120]: I0220 15:17:18.625764 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vxpzd"]
Feb 20 15:17:18.641343 master-0 kubenswrapper[28120]: I0220 15:17:18.641277 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api"
pods=["openstack/keystone-bootstrap-vxpzd"]
Feb 20 15:17:18.717809 master-0 kubenswrapper[28120]: I0220 15:17:18.717750 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ktfwm"]
Feb 20 15:17:18.718555 master-0 kubenswrapper[28120]: E0220 15:17:18.718537 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb061a46-8122-48b7-a549-c7bd5bf1c0ea" containerName="mariadb-database-create"
Feb 20 15:17:18.718555 master-0 kubenswrapper[28120]: I0220 15:17:18.718554 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb061a46-8122-48b7-a549-c7bd5bf1c0ea" containerName="mariadb-database-create"
Feb 20 15:17:18.718713 master-0 kubenswrapper[28120]: E0220 15:17:18.718610 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c236340-192f-4dc7-b77a-798ac1771a5e" containerName="keystone-bootstrap"
Feb 20 15:17:18.718713 master-0 kubenswrapper[28120]: I0220 15:17:18.718618 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c236340-192f-4dc7-b77a-798ac1771a5e" containerName="keystone-bootstrap"
Feb 20 15:17:18.718713 master-0 kubenswrapper[28120]: E0220 15:17:18.718630 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a459ddfe-81df-4abd-867d-1ea8ed2f0c95" containerName="mariadb-account-create-update"
Feb 20 15:17:18.718713 master-0 kubenswrapper[28120]: I0220 15:17:18.718637 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a459ddfe-81df-4abd-867d-1ea8ed2f0c95" containerName="mariadb-account-create-update"
Feb 20 15:17:18.718979 master-0 kubenswrapper[28120]: I0220 15:17:18.718818 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="a459ddfe-81df-4abd-867d-1ea8ed2f0c95" containerName="mariadb-account-create-update"
Feb 20 15:17:18.718979 master-0 kubenswrapper[28120]: I0220 15:17:18.718843 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c236340-192f-4dc7-b77a-798ac1771a5e" containerName="keystone-bootstrap"
Feb 20 15:17:18.718979 master-0 kubenswrapper[28120]: I0220 15:17:18.718872 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb061a46-8122-48b7-a549-c7bd5bf1c0ea" containerName="mariadb-database-create"
Feb 20 15:17:18.720250 master-0 kubenswrapper[28120]: I0220 15:17:18.719511 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.724761 master-0 kubenswrapper[28120]: I0220 15:17:18.724725 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 20 15:17:18.724894 master-0 kubenswrapper[28120]: I0220 15:17:18.724831 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 20 15:17:18.725032 master-0 kubenswrapper[28120]: I0220 15:17:18.725003 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 20 15:17:18.725173 master-0 kubenswrapper[28120]: I0220 15:17:18.725153 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 20 15:17:18.747590 master-0 kubenswrapper[28120]: I0220 15:17:18.747559 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ktfwm"]
Feb 20 15:17:18.846256 master-0 kubenswrapper[28120]: I0220 15:17:18.846116 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-scripts\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.846256 master-0 kubenswrapper[28120]: I0220 15:17:18.846173 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-config-data\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.846256 master-0 kubenswrapper[28120]: I0220 15:17:18.846189 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-combined-ca-bundle\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.848190 master-0 kubenswrapper[28120]: I0220 15:17:18.848149 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-fernet-keys\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.848353 master-0 kubenswrapper[28120]: I0220 15:17:18.848267 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8tw\" (UniqueName: \"kubernetes.io/projected/1872e13f-a70c-47d2-8f9b-2a068aecec22-kube-api-access-rd8tw\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.848442 master-0 kubenswrapper[28120]: I0220 15:17:18.848423 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-credential-keys\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.950788 master-0 kubenswrapper[28120]: I0220 15:17:18.950735 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-credential-keys\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.951006 master-0 kubenswrapper[28120]: I0220 15:17:18.950813 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-scripts\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.951006 master-0 kubenswrapper[28120]: I0220 15:17:18.950839 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-config-data\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.951006 master-0 kubenswrapper[28120]: I0220 15:17:18.950949 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-combined-ca-bundle\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.951107 master-0 kubenswrapper[28120]: I0220 15:17:18.951059 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-fernet-keys\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.951139 master-0 kubenswrapper[28120]: I0220 15:17:18.951119 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd8tw\" (UniqueName:
\"kubernetes.io/projected/1872e13f-a70c-47d2-8f9b-2a068aecec22-kube-api-access-rd8tw\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.954315 master-0 kubenswrapper[28120]: I0220 15:17:18.954251 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-config-data\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.955000 master-0 kubenswrapper[28120]: I0220 15:17:18.954731 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-scripts\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.955000 master-0 kubenswrapper[28120]: I0220 15:17:18.954876 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-credential-keys\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.955600 master-0 kubenswrapper[28120]: I0220 15:17:18.955568 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-fernet-keys\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.960668 master-0 kubenswrapper[28120]: I0220 15:17:18.960644 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-combined-ca-bundle\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:18.967382 master-0 kubenswrapper[28120]: I0220 15:17:18.967115 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd8tw\" (UniqueName: \"kubernetes.io/projected/1872e13f-a70c-47d2-8f9b-2a068aecec22-kube-api-access-rd8tw\") pod \"keystone-bootstrap-ktfwm\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:19.050261 master-0 kubenswrapper[28120]: I0220 15:17:19.050191 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ktfwm"
Feb 20 15:17:19.270177 master-0 kubenswrapper[28120]: I0220 15:17:19.270118 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7"
Feb 20 15:17:19.346880 master-0 kubenswrapper[28120]: I0220 15:17:19.346289 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67dc4d787c-4fhxl"]
Feb 20 15:17:19.346880 master-0 kubenswrapper[28120]: I0220 15:17:19.346661 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" podUID="f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" containerName="dnsmasq-dns" containerID="cri-o://01800cbdf70942edc132b8b6021821c204891c17ac24664d21a8b6d40ef9eb4c" gracePeriod=10
Feb 20 15:17:19.531148 master-0 kubenswrapper[28120]: I0220 15:17:19.531072 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ktfwm"]
Feb 20 15:17:19.550815 master-0 kubenswrapper[28120]: I0220 15:17:19.550770 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-external-api-0" event={"ID":"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b","Type":"ContainerStarted","Data":"75f52938914e3c2ceead05696ed96550dd740a1b3e1f9d04aca0d29cf16fd722"}
Feb 20 15:17:19.551331 master-0 kubenswrapper[28120]: I0220 15:17:19.551313 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-external-api-0" event={"ID":"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b","Type":"ContainerStarted","Data":"b2b2056d86fa93ae35e0cb2f1b218779c4657bedf9c221d45027c5c2fd17209e"}
Feb 20 15:17:19.553806 master-0 kubenswrapper[28120]: I0220 15:17:19.553785 28120 generic.go:334] "Generic (PLEG): container finished" podID="f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" containerID="01800cbdf70942edc132b8b6021821c204891c17ac24664d21a8b6d40ef9eb4c" exitCode=0
Feb 20 15:17:19.554075 master-0 kubenswrapper[28120]: I0220 15:17:19.554032 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" event={"ID":"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8","Type":"ContainerDied","Data":"01800cbdf70942edc132b8b6021821c204891c17ac24664d21a8b6d40ef9eb4c"}
Feb 20 15:17:20.027320 master-0 kubenswrapper[28120]: I0220 15:17:20.027255 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-4crzf"]
Feb 20 15:17:20.030611 master-0 kubenswrapper[28120]: I0220 15:17:20.030312 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.033444 master-0 kubenswrapper[28120]: I0220 15:17:20.033392 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts"
Feb 20 15:17:20.036143 master-0 kubenswrapper[28120]: I0220 15:17:20.036104 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data"
Feb 20 15:17:20.043907 master-0 kubenswrapper[28120]: I0220 15:17:20.043846 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-4crzf"]
Feb 20 15:17:20.075021 master-0 kubenswrapper[28120]: I0220 15:17:20.072712 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c236340-192f-4dc7-b77a-798ac1771a5e" path="/var/lib/kubelet/pods/7c236340-192f-4dc7-b77a-798ac1771a5e/volumes"
Feb 20 15:17:20.128206 master-0 kubenswrapper[28120]: I0220 15:17:20.128082 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-combined-ca-bundle\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.128410 master-0 kubenswrapper[28120]: I0220 15:17:20.128227 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e1911cb1-e2d4-4be7-93b8-43bc600e8386-etc-podinfo\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.128448 master-0 kubenswrapper[28120]: I0220 15:17:20.128414 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-config-data\") pod \"ironic-db-sync-4crzf\" (UID:
\"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.128628 master-0 kubenswrapper[28120]: I0220 15:17:20.128579 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9krn9\" (UniqueName: \"kubernetes.io/projected/e1911cb1-e2d4-4be7-93b8-43bc600e8386-kube-api-access-9krn9\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.128821 master-0 kubenswrapper[28120]: I0220 15:17:20.128786 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e1911cb1-e2d4-4be7-93b8-43bc600e8386-config-data-merged\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.128880 master-0 kubenswrapper[28120]: I0220 15:17:20.128859 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-scripts\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.231830 master-0 kubenswrapper[28120]: I0220 15:17:20.231770 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e1911cb1-e2d4-4be7-93b8-43bc600e8386-config-data-merged\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.232317 master-0 kubenswrapper[28120]: I0220 15:17:20.232263 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-scripts\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.232489 master-0 kubenswrapper[28120]: I0220 15:17:20.232456 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-combined-ca-bundle\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.232634 master-0 kubenswrapper[28120]: I0220 15:17:20.232493 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e1911cb1-e2d4-4be7-93b8-43bc600e8386-etc-podinfo\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.232634 master-0 kubenswrapper[28120]: I0220 15:17:20.232531 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e1911cb1-e2d4-4be7-93b8-43bc600e8386-config-data-merged\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.232716 master-0 kubenswrapper[28120]: I0220 15:17:20.232631 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-config-data\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.232963 master-0 kubenswrapper[28120]: I0220 15:17:20.232912 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9krn9\" (UniqueName: \"kubernetes.io/projected/e1911cb1-e2d4-4be7-93b8-43bc600e8386-kube-api-access-9krn9\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.236507 master-0 kubenswrapper[28120]: I0220 15:17:20.236462 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-scripts\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.236750 master-0 kubenswrapper[28120]: I0220 15:17:20.236711 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-combined-ca-bundle\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.237612 master-0 kubenswrapper[28120]: I0220 15:17:20.237535 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e1911cb1-e2d4-4be7-93b8-43bc600e8386-etc-podinfo\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.238011 master-0 kubenswrapper[28120]: I0220 15:17:20.237974 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-config-data\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.254110 master-0 kubenswrapper[28120]: I0220 15:17:20.254066 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9krn9\" (UniqueName: \"kubernetes.io/projected/e1911cb1-e2d4-4be7-93b8-43bc600e8386-kube-api-access-9krn9\") pod \"ironic-db-sync-4crzf\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.353128 master-0 kubenswrapper[28120]: I0220 15:17:20.352990 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-4crzf"
Feb 20 15:17:20.569815 master-0 kubenswrapper[28120]: I0220 15:17:20.569734 28120 generic.go:334] "Generic (PLEG): container finished" podID="0efac632-d90c-4e67-b0a7-4dd3df2a08f9" containerID="d0614b81c2688cf2f44582c32b3564f9605238cd8e0ea574e48b88446f7d02f6" exitCode=0
Feb 20 15:17:20.570608 master-0 kubenswrapper[28120]: I0220 15:17:20.569829 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6pmxn" event={"ID":"0efac632-d90c-4e67-b0a7-4dd3df2a08f9","Type":"ContainerDied","Data":"d0614b81c2688cf2f44582c32b3564f9605238cd8e0ea574e48b88446f7d02f6"}
Feb 20 15:17:20.573862 master-0 kubenswrapper[28120]: I0220 15:17:20.573806 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-external-api-0" event={"ID":"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b","Type":"ContainerStarted","Data":"7876bd0e3790df9dcf01e0fefd2e587d51e3b8bb2707a59f4fc0b26d9446461c"}
Feb 20 15:17:20.630061 master-0 kubenswrapper[28120]: I0220 15:17:20.629869 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-c0df7-default-external-api-0" podStartSLOduration=5.629817302 podStartE2EDuration="5.629817302s" podCreationTimestamp="2026-02-20 15:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:20.61809042 +0000 UTC m=+978.878883993" watchObservedRunningTime="2026-02-20 15:17:20.629817302 +0000 UTC m=+978.890610875"
Feb 20 15:17:25.150664 master-0 kubenswrapper[28120]: I0220 15:17:25.149587 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:25.150664 master-0 kubenswrapper[28120]: I0220 15:17:25.149646 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:25.196469 master-0 kubenswrapper[28120]: I0220 15:17:25.196407 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:25.199753 master-0 kubenswrapper[28120]: I0220 15:17:25.199715 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:25.256029 master-0 kubenswrapper[28120]: I0220 15:17:25.255962 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" podUID="f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.201:5353: i/o timeout"
Feb 20 15:17:25.639908 master-0 kubenswrapper[28120]: I0220 15:17:25.639633 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:25.639908 master-0 kubenswrapper[28120]: I0220 15:17:25.639704 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:17:27.222217 master-0 kubenswrapper[28120]: I0220 15:17:27.222148 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:17:27.222217 master-0 kubenswrapper[28120]: I0220 15:17:27.222218 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:17:27.257074 master-0 kubenswrapper[28120]: I0220 15:17:27.257016 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:17:27.275319 master-0 kubenswrapper[28120]: I0220 15:17:27.275267 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-c0df7-default-external-api-0"
Feb
20 15:17:27.528549 master-0 kubenswrapper[28120]: I0220 15:17:27.528454 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-c0df7-default-internal-api-0" Feb 20 15:17:27.674616 master-0 kubenswrapper[28120]: I0220 15:17:27.674538 28120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 20 15:17:27.675389 master-0 kubenswrapper[28120]: I0220 15:17:27.675154 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:27.675389 master-0 kubenswrapper[28120]: I0220 15:17:27.675355 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:27.689443 master-0 kubenswrapper[28120]: I0220 15:17:27.689392 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-c0df7-default-internal-api-0" Feb 20 15:17:29.123436 master-0 kubenswrapper[28120]: W0220 15:17:29.122748 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1872e13f_a70c_47d2_8f9b_2a068aecec22.slice/crio-6c22488933620e29da466df1d3ab0b6a335f9b81b8b84907314060bf97f351bc WatchSource:0}: Error finding container 6c22488933620e29da466df1d3ab0b6a335f9b81b8b84907314060bf97f351bc: Status 404 returned error can't find the container with id 6c22488933620e29da466df1d3ab0b6a335f9b81b8b84907314060bf97f351bc Feb 20 15:17:29.387433 master-0 kubenswrapper[28120]: I0220 15:17:29.387382 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:29.397686 master-0 kubenswrapper[28120]: I0220 15:17:29.397639 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:17:29.499632 master-0 kubenswrapper[28120]: I0220 15:17:29.499581 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-dns-swift-storage-0\") pod \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " Feb 20 15:17:29.499853 master-0 kubenswrapper[28120]: I0220 15:17:29.499672 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-scripts\") pod \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " Feb 20 15:17:29.499853 master-0 kubenswrapper[28120]: I0220 15:17:29.499742 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-ovsdbserver-nb\") pod \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " Feb 20 15:17:29.499853 master-0 kubenswrapper[28120]: I0220 15:17:29.499767 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-logs\") pod \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " Feb 20 15:17:29.499853 master-0 kubenswrapper[28120]: I0220 15:17:29.499795 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-ovsdbserver-sb\") pod \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " Feb 20 15:17:29.499853 master-0 kubenswrapper[28120]: I0220 15:17:29.499832 28120 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vpmww\" (UniqueName: \"kubernetes.io/projected/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-kube-api-access-vpmww\") pod \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " Feb 20 15:17:29.500170 master-0 kubenswrapper[28120]: I0220 15:17:29.499857 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-config\") pod \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " Feb 20 15:17:29.500170 master-0 kubenswrapper[28120]: I0220 15:17:29.499887 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-config-data\") pod \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " Feb 20 15:17:29.500170 master-0 kubenswrapper[28120]: I0220 15:17:29.499903 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t72vq\" (UniqueName: \"kubernetes.io/projected/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-kube-api-access-t72vq\") pod \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " Feb 20 15:17:29.501313 master-0 kubenswrapper[28120]: I0220 15:17:29.500397 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-dns-svc\") pod \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\" (UID: \"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8\") " Feb 20 15:17:29.501313 master-0 kubenswrapper[28120]: I0220 15:17:29.500592 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-combined-ca-bundle\") 
pod \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\" (UID: \"0efac632-d90c-4e67-b0a7-4dd3df2a08f9\") " Feb 20 15:17:29.501747 master-0 kubenswrapper[28120]: I0220 15:17:29.501573 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-logs" (OuterVolumeSpecName: "logs") pod "0efac632-d90c-4e67-b0a7-4dd3df2a08f9" (UID: "0efac632-d90c-4e67-b0a7-4dd3df2a08f9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:17:29.505531 master-0 kubenswrapper[28120]: I0220 15:17:29.505466 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-kube-api-access-t72vq" (OuterVolumeSpecName: "kube-api-access-t72vq") pod "f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" (UID: "f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8"). InnerVolumeSpecName "kube-api-access-t72vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:29.505987 master-0 kubenswrapper[28120]: I0220 15:17:29.505911 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-scripts" (OuterVolumeSpecName: "scripts") pod "0efac632-d90c-4e67-b0a7-4dd3df2a08f9" (UID: "0efac632-d90c-4e67-b0a7-4dd3df2a08f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:29.507836 master-0 kubenswrapper[28120]: I0220 15:17:29.507467 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-kube-api-access-vpmww" (OuterVolumeSpecName: "kube-api-access-vpmww") pod "0efac632-d90c-4e67-b0a7-4dd3df2a08f9" (UID: "0efac632-d90c-4e67-b0a7-4dd3df2a08f9"). InnerVolumeSpecName "kube-api-access-vpmww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:29.541023 master-0 kubenswrapper[28120]: I0220 15:17:29.540952 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0efac632-d90c-4e67-b0a7-4dd3df2a08f9" (UID: "0efac632-d90c-4e67-b0a7-4dd3df2a08f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:29.545713 master-0 kubenswrapper[28120]: I0220 15:17:29.545406 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:29.577533 master-0 kubenswrapper[28120]: I0220 15:17:29.574064 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-config-data" (OuterVolumeSpecName: "config-data") pod "0efac632-d90c-4e67-b0a7-4dd3df2a08f9" (UID: "0efac632-d90c-4e67-b0a7-4dd3df2a08f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:29.577533 master-0 kubenswrapper[28120]: I0220 15:17:29.577347 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" (UID: "f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:29.583364 master-0 kubenswrapper[28120]: I0220 15:17:29.583218 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" (UID: "f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:29.588410 master-0 kubenswrapper[28120]: I0220 15:17:29.587594 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-config" (OuterVolumeSpecName: "config") pod "f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" (UID: "f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:29.603293 master-0 kubenswrapper[28120]: I0220 15:17:29.603215 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:29.603293 master-0 kubenswrapper[28120]: I0220 15:17:29.603275 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:29.603293 master-0 kubenswrapper[28120]: I0220 15:17:29.603286 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:29.603293 master-0 kubenswrapper[28120]: I0220 15:17:29.603297 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:29.603293 master-0 kubenswrapper[28120]: I0220 15:17:29.603309 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpmww\" (UniqueName: \"kubernetes.io/projected/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-kube-api-access-vpmww\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:29.603609 master-0 kubenswrapper[28120]: I0220 15:17:29.603322 28120 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:29.603609 master-0 kubenswrapper[28120]: I0220 15:17:29.603334 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0efac632-d90c-4e67-b0a7-4dd3df2a08f9-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:29.603609 master-0 kubenswrapper[28120]: I0220 15:17:29.603346 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t72vq\" (UniqueName: \"kubernetes.io/projected/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-kube-api-access-t72vq\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:29.603609 master-0 kubenswrapper[28120]: I0220 15:17:29.603356 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:29.607226 master-0 kubenswrapper[28120]: I0220 15:17:29.607168 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" (UID: "f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:29.610938 master-0 kubenswrapper[28120]: I0220 15:17:29.610875 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" (UID: "f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:29.650448 master-0 kubenswrapper[28120]: I0220 15:17:29.649460 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:17:29.704406 master-0 kubenswrapper[28120]: I0220 15:17:29.701971 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" event={"ID":"f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8","Type":"ContainerDied","Data":"9c0380b1b47e2ae4b0ea539d6643e8046df73477bcb87a26671b1f505779ba8e"} Feb 20 15:17:29.704406 master-0 kubenswrapper[28120]: I0220 15:17:29.702025 28120 scope.go:117] "RemoveContainer" containerID="01800cbdf70942edc132b8b6021821c204891c17ac24664d21a8b6d40ef9eb4c" Feb 20 15:17:29.704406 master-0 kubenswrapper[28120]: I0220 15:17:29.702131 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" Feb 20 15:17:29.711534 master-0 kubenswrapper[28120]: I0220 15:17:29.711119 28120 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:29.711534 master-0 kubenswrapper[28120]: I0220 15:17:29.711162 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:29.715617 master-0 kubenswrapper[28120]: I0220 15:17:29.715579 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6pmxn" event={"ID":"0efac632-d90c-4e67-b0a7-4dd3df2a08f9","Type":"ContainerDied","Data":"ec41fd3c9682d28f23e2bb1267b7c34cbbdcb0b4880609c23860dd35a3c88016"} Feb 20 15:17:29.715617 master-0 kubenswrapper[28120]: I0220 15:17:29.715616 28120 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="ec41fd3c9682d28f23e2bb1267b7c34cbbdcb0b4880609c23860dd35a3c88016" Feb 20 15:17:29.715766 master-0 kubenswrapper[28120]: I0220 15:17:29.715743 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6pmxn" Feb 20 15:17:29.719271 master-0 kubenswrapper[28120]: I0220 15:17:29.719041 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ktfwm" event={"ID":"1872e13f-a70c-47d2-8f9b-2a068aecec22","Type":"ContainerStarted","Data":"5f24e68d1509bbc88a4b27575429cd440bcc6f1bc52e0cea939137ca70189da9"} Feb 20 15:17:29.719271 master-0 kubenswrapper[28120]: I0220 15:17:29.719071 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ktfwm" event={"ID":"1872e13f-a70c-47d2-8f9b-2a068aecec22","Type":"ContainerStarted","Data":"6c22488933620e29da466df1d3ab0b6a335f9b81b8b84907314060bf97f351bc"} Feb 20 15:17:29.757805 master-0 kubenswrapper[28120]: I0220 15:17:29.757617 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ktfwm" podStartSLOduration=11.757597869 podStartE2EDuration="11.757597869s" podCreationTimestamp="2026-02-20 15:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:29.745321523 +0000 UTC m=+988.006115106" watchObservedRunningTime="2026-02-20 15:17:29.757597869 +0000 UTC m=+988.018391432" Feb 20 15:17:29.793967 master-0 kubenswrapper[28120]: I0220 15:17:29.793899 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67dc4d787c-4fhxl"] Feb 20 15:17:29.802484 master-0 kubenswrapper[28120]: I0220 15:17:29.802440 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67dc4d787c-4fhxl"] Feb 20 15:17:29.802959 master-0 kubenswrapper[28120]: I0220 15:17:29.802907 28120 
scope.go:117] "RemoveContainer" containerID="f47025edb1499fcf9ab38b0aea9abbad52d8f524d6827879d741702f3793da79" Feb 20 15:17:29.812159 master-0 kubenswrapper[28120]: I0220 15:17:29.812107 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-4crzf"] Feb 20 15:17:29.838150 master-0 kubenswrapper[28120]: W0220 15:17:29.838049 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1911cb1_e2d4_4be7_93b8_43bc600e8386.slice/crio-ae7326db1cc79fe93bc34c43dd8198cde5e42b373206278616b80a554f9c2e4d WatchSource:0}: Error finding container ae7326db1cc79fe93bc34c43dd8198cde5e42b373206278616b80a554f9c2e4d: Status 404 returned error can't find the container with id ae7326db1cc79fe93bc34c43dd8198cde5e42b373206278616b80a554f9c2e4d Feb 20 15:17:30.081963 master-0 kubenswrapper[28120]: I0220 15:17:30.077141 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" path="/var/lib/kubelet/pods/f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8/volumes" Feb 20 15:17:30.256300 master-0 kubenswrapper[28120]: I0220 15:17:30.256216 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67dc4d787c-4fhxl" podUID="f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.201:5353: i/o timeout" Feb 20 15:17:30.583940 master-0 kubenswrapper[28120]: I0220 15:17:30.582286 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7fb458c87b-4xkt8"] Feb 20 15:17:30.587857 master-0 kubenswrapper[28120]: E0220 15:17:30.584938 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" containerName="dnsmasq-dns" Feb 20 15:17:30.587857 master-0 kubenswrapper[28120]: I0220 15:17:30.584978 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" 
containerName="dnsmasq-dns" Feb 20 15:17:30.587857 master-0 kubenswrapper[28120]: E0220 15:17:30.585015 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0efac632-d90c-4e67-b0a7-4dd3df2a08f9" containerName="placement-db-sync" Feb 20 15:17:30.587857 master-0 kubenswrapper[28120]: I0220 15:17:30.585024 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="0efac632-d90c-4e67-b0a7-4dd3df2a08f9" containerName="placement-db-sync" Feb 20 15:17:30.587857 master-0 kubenswrapper[28120]: E0220 15:17:30.585059 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" containerName="init" Feb 20 15:17:30.587857 master-0 kubenswrapper[28120]: I0220 15:17:30.585069 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" containerName="init" Feb 20 15:17:30.587857 master-0 kubenswrapper[28120]: I0220 15:17:30.587293 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1bcbfd6-13f6-4fa6-9afc-f5fd68e4a3e8" containerName="dnsmasq-dns" Feb 20 15:17:30.587857 master-0 kubenswrapper[28120]: I0220 15:17:30.587335 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efac632-d90c-4e67-b0a7-4dd3df2a08f9" containerName="placement-db-sync" Feb 20 15:17:30.589712 master-0 kubenswrapper[28120]: I0220 15:17:30.588825 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.590743 master-0 kubenswrapper[28120]: I0220 15:17:30.590694 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 20 15:17:30.591996 master-0 kubenswrapper[28120]: I0220 15:17:30.591956 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 20 15:17:30.593109 master-0 kubenswrapper[28120]: I0220 15:17:30.592267 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 20 15:17:30.593109 master-0 kubenswrapper[28120]: I0220 15:17:30.592413 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 20 15:17:30.608606 master-0 kubenswrapper[28120]: I0220 15:17:30.607821 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fb458c87b-4xkt8"] Feb 20 15:17:30.635251 master-0 kubenswrapper[28120]: I0220 15:17:30.634656 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhhmj\" (UniqueName: \"kubernetes.io/projected/b0d27db3-3e55-4041-aca6-83c2da3645bb-kube-api-access-fhhmj\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.635251 master-0 kubenswrapper[28120]: I0220 15:17:30.634741 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-combined-ca-bundle\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.635251 master-0 kubenswrapper[28120]: I0220 15:17:30.634827 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-scripts\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.635251 master-0 kubenswrapper[28120]: I0220 15:17:30.634853 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-internal-tls-certs\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.635251 master-0 kubenswrapper[28120]: I0220 15:17:30.635013 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-public-tls-certs\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.635251 master-0 kubenswrapper[28120]: I0220 15:17:30.635062 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-config-data\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.635251 master-0 kubenswrapper[28120]: I0220 15:17:30.635152 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d27db3-3e55-4041-aca6-83c2da3645bb-logs\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.735258 master-0 kubenswrapper[28120]: I0220 15:17:30.735190 28120 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ironic-db-sync-4crzf" event={"ID":"e1911cb1-e2d4-4be7-93b8-43bc600e8386","Type":"ContainerStarted","Data":"ae7326db1cc79fe93bc34c43dd8198cde5e42b373206278616b80a554f9c2e4d"} Feb 20 15:17:30.739318 master-0 kubenswrapper[28120]: I0220 15:17:30.739276 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-db-sync-qnpmn" event={"ID":"395d8eca-6336-4d08-b510-0d5d5b95e114","Type":"ContainerStarted","Data":"c2e588188b062e8484fa6e357c9f0ccf6a2db0441d33faa4f33c8634ec8d42bc"} Feb 20 15:17:30.739601 master-0 kubenswrapper[28120]: I0220 15:17:30.737218 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-scripts\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.739694 master-0 kubenswrapper[28120]: I0220 15:17:30.739667 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-internal-tls-certs\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.740021 master-0 kubenswrapper[28120]: I0220 15:17:30.739984 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-public-tls-certs\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.740208 master-0 kubenswrapper[28120]: I0220 15:17:30.740179 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-config-data\") pod 
\"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.740303 master-0 kubenswrapper[28120]: I0220 15:17:30.740276 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d27db3-3e55-4041-aca6-83c2da3645bb-logs\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.740483 master-0 kubenswrapper[28120]: I0220 15:17:30.740455 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhhmj\" (UniqueName: \"kubernetes.io/projected/b0d27db3-3e55-4041-aca6-83c2da3645bb-kube-api-access-fhhmj\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.740554 master-0 kubenswrapper[28120]: I0220 15:17:30.740514 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-combined-ca-bundle\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.741382 master-0 kubenswrapper[28120]: I0220 15:17:30.741352 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d27db3-3e55-4041-aca6-83c2da3645bb-logs\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.745526 master-0 kubenswrapper[28120]: I0220 15:17:30.744874 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-scripts\") pod \"placement-7fb458c87b-4xkt8\" (UID: 
\"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.745526 master-0 kubenswrapper[28120]: I0220 15:17:30.745131 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-config-data\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.746984 master-0 kubenswrapper[28120]: I0220 15:17:30.746946 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-public-tls-certs\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.749389 master-0 kubenswrapper[28120]: I0220 15:17:30.749349 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-internal-tls-certs\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.750435 master-0 kubenswrapper[28120]: I0220 15:17:30.750402 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-combined-ca-bundle\") pod \"placement-7fb458c87b-4xkt8\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.759069 master-0 kubenswrapper[28120]: I0220 15:17:30.759028 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhhmj\" (UniqueName: \"kubernetes.io/projected/b0d27db3-3e55-4041-aca6-83c2da3645bb-kube-api-access-fhhmj\") pod \"placement-7fb458c87b-4xkt8\" (UID: 
\"b0d27db3-3e55-4041-aca6-83c2da3645bb\") " pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:30.773113 master-0 kubenswrapper[28120]: I0220 15:17:30.772996 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-eea69-db-sync-qnpmn" podStartSLOduration=3.703890207 podStartE2EDuration="22.772971525s" podCreationTimestamp="2026-02-20 15:17:08 +0000 UTC" firstStartedPulling="2026-02-20 15:17:10.202632761 +0000 UTC m=+968.463426324" lastFinishedPulling="2026-02-20 15:17:29.271714069 +0000 UTC m=+987.532507642" observedRunningTime="2026-02-20 15:17:30.763742625 +0000 UTC m=+989.024536198" watchObservedRunningTime="2026-02-20 15:17:30.772971525 +0000 UTC m=+989.033765088" Feb 20 15:17:30.919963 master-0 kubenswrapper[28120]: I0220 15:17:30.919823 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:31.428265 master-0 kubenswrapper[28120]: W0220 15:17:31.428216 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0d27db3_3e55_4041_aca6_83c2da3645bb.slice/crio-f26c7456736ad037c9044e68bc655f286f78202921a216d2edf384eab37787a5 WatchSource:0}: Error finding container f26c7456736ad037c9044e68bc655f286f78202921a216d2edf384eab37787a5: Status 404 returned error can't find the container with id f26c7456736ad037c9044e68bc655f286f78202921a216d2edf384eab37787a5 Feb 20 15:17:31.441397 master-0 kubenswrapper[28120]: I0220 15:17:31.441352 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fb458c87b-4xkt8"] Feb 20 15:17:31.755220 master-0 kubenswrapper[28120]: I0220 15:17:31.755168 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb458c87b-4xkt8" event={"ID":"b0d27db3-3e55-4041-aca6-83c2da3645bb","Type":"ContainerStarted","Data":"c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a"} Feb 20 15:17:31.755493 
master-0 kubenswrapper[28120]: I0220 15:17:31.755442 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb458c87b-4xkt8" event={"ID":"b0d27db3-3e55-4041-aca6-83c2da3645bb","Type":"ContainerStarted","Data":"f26c7456736ad037c9044e68bc655f286f78202921a216d2edf384eab37787a5"} Feb 20 15:17:32.775447 master-0 kubenswrapper[28120]: I0220 15:17:32.775341 28120 generic.go:334] "Generic (PLEG): container finished" podID="1872e13f-a70c-47d2-8f9b-2a068aecec22" containerID="5f24e68d1509bbc88a4b27575429cd440bcc6f1bc52e0cea939137ca70189da9" exitCode=0 Feb 20 15:17:32.775447 master-0 kubenswrapper[28120]: I0220 15:17:32.775435 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ktfwm" event={"ID":"1872e13f-a70c-47d2-8f9b-2a068aecec22","Type":"ContainerDied","Data":"5f24e68d1509bbc88a4b27575429cd440bcc6f1bc52e0cea939137ca70189da9"} Feb 20 15:17:35.650051 master-0 kubenswrapper[28120]: I0220 15:17:35.647321 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ktfwm" Feb 20 15:17:35.668420 master-0 kubenswrapper[28120]: I0220 15:17:35.668301 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-config-data\") pod \"1872e13f-a70c-47d2-8f9b-2a068aecec22\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " Feb 20 15:17:35.675447 master-0 kubenswrapper[28120]: I0220 15:17:35.668459 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-fernet-keys\") pod \"1872e13f-a70c-47d2-8f9b-2a068aecec22\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " Feb 20 15:17:35.675447 master-0 kubenswrapper[28120]: I0220 15:17:35.669804 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-credential-keys\") pod \"1872e13f-a70c-47d2-8f9b-2a068aecec22\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " Feb 20 15:17:35.675447 master-0 kubenswrapper[28120]: I0220 15:17:35.669847 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-combined-ca-bundle\") pod \"1872e13f-a70c-47d2-8f9b-2a068aecec22\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " Feb 20 15:17:35.675447 master-0 kubenswrapper[28120]: I0220 15:17:35.669897 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd8tw\" (UniqueName: \"kubernetes.io/projected/1872e13f-a70c-47d2-8f9b-2a068aecec22-kube-api-access-rd8tw\") pod \"1872e13f-a70c-47d2-8f9b-2a068aecec22\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " Feb 20 15:17:35.675447 master-0 kubenswrapper[28120]: I0220 15:17:35.670041 28120 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-scripts\") pod \"1872e13f-a70c-47d2-8f9b-2a068aecec22\" (UID: \"1872e13f-a70c-47d2-8f9b-2a068aecec22\") " Feb 20 15:17:35.675447 master-0 kubenswrapper[28120]: I0220 15:17:35.675335 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1872e13f-a70c-47d2-8f9b-2a068aecec22" (UID: "1872e13f-a70c-47d2-8f9b-2a068aecec22"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:35.684270 master-0 kubenswrapper[28120]: I0220 15:17:35.678117 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1872e13f-a70c-47d2-8f9b-2a068aecec22" (UID: "1872e13f-a70c-47d2-8f9b-2a068aecec22"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:35.684270 master-0 kubenswrapper[28120]: I0220 15:17:35.678264 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1872e13f-a70c-47d2-8f9b-2a068aecec22-kube-api-access-rd8tw" (OuterVolumeSpecName: "kube-api-access-rd8tw") pod "1872e13f-a70c-47d2-8f9b-2a068aecec22" (UID: "1872e13f-a70c-47d2-8f9b-2a068aecec22"). InnerVolumeSpecName "kube-api-access-rd8tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:35.689233 master-0 kubenswrapper[28120]: I0220 15:17:35.689159 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-scripts" (OuterVolumeSpecName: "scripts") pod "1872e13f-a70c-47d2-8f9b-2a068aecec22" (UID: "1872e13f-a70c-47d2-8f9b-2a068aecec22"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:35.729913 master-0 kubenswrapper[28120]: I0220 15:17:35.729711 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-config-data" (OuterVolumeSpecName: "config-data") pod "1872e13f-a70c-47d2-8f9b-2a068aecec22" (UID: "1872e13f-a70c-47d2-8f9b-2a068aecec22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:35.731812 master-0 kubenswrapper[28120]: I0220 15:17:35.731724 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1872e13f-a70c-47d2-8f9b-2a068aecec22" (UID: "1872e13f-a70c-47d2-8f9b-2a068aecec22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:35.773558 master-0 kubenswrapper[28120]: I0220 15:17:35.773482 28120 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:35.773657 master-0 kubenswrapper[28120]: I0220 15:17:35.773564 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:35.773657 master-0 kubenswrapper[28120]: I0220 15:17:35.773589 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd8tw\" (UniqueName: \"kubernetes.io/projected/1872e13f-a70c-47d2-8f9b-2a068aecec22-kube-api-access-rd8tw\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:35.773657 master-0 kubenswrapper[28120]: I0220 15:17:35.773612 28120 reconciler_common.go:293] "Volume detached for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:35.773657 master-0 kubenswrapper[28120]: I0220 15:17:35.773633 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:35.773657 master-0 kubenswrapper[28120]: I0220 15:17:35.773652 28120 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1872e13f-a70c-47d2-8f9b-2a068aecec22-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:35.808139 master-0 kubenswrapper[28120]: I0220 15:17:35.808109 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ktfwm" event={"ID":"1872e13f-a70c-47d2-8f9b-2a068aecec22","Type":"ContainerDied","Data":"6c22488933620e29da466df1d3ab0b6a335f9b81b8b84907314060bf97f351bc"} Feb 20 15:17:35.808334 master-0 kubenswrapper[28120]: I0220 15:17:35.808320 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c22488933620e29da466df1d3ab0b6a335f9b81b8b84907314060bf97f351bc" Feb 20 15:17:35.808409 master-0 kubenswrapper[28120]: I0220 15:17:35.808336 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ktfwm" Feb 20 15:17:36.827545 master-0 kubenswrapper[28120]: I0220 15:17:36.827472 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb458c87b-4xkt8" event={"ID":"b0d27db3-3e55-4041-aca6-83c2da3645bb","Type":"ContainerStarted","Data":"649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456"} Feb 20 15:17:36.829450 master-0 kubenswrapper[28120]: I0220 15:17:36.829341 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:36.829813 master-0 kubenswrapper[28120]: I0220 15:17:36.829749 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:36.836472 master-0 kubenswrapper[28120]: I0220 15:17:36.836388 28120 generic.go:334] "Generic (PLEG): container finished" podID="e1911cb1-e2d4-4be7-93b8-43bc600e8386" containerID="b4586c0759c3d47725d154a29aae6d71c8b037cac5f9dd4ab0306b4c89441a31" exitCode=0 Feb 20 15:17:36.836736 master-0 kubenswrapper[28120]: I0220 15:17:36.836474 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-4crzf" event={"ID":"e1911cb1-e2d4-4be7-93b8-43bc600e8386","Type":"ContainerDied","Data":"b4586c0759c3d47725d154a29aae6d71c8b037cac5f9dd4ab0306b4c89441a31"} Feb 20 15:17:36.882507 master-0 kubenswrapper[28120]: I0220 15:17:36.882450 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-856c7f5b5-b8rb7"] Feb 20 15:17:36.882994 master-0 kubenswrapper[28120]: E0220 15:17:36.882945 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1872e13f-a70c-47d2-8f9b-2a068aecec22" containerName="keystone-bootstrap" Feb 20 15:17:36.882994 master-0 kubenswrapper[28120]: I0220 15:17:36.882965 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1872e13f-a70c-47d2-8f9b-2a068aecec22" containerName="keystone-bootstrap" Feb 20 15:17:36.883271 master-0 
kubenswrapper[28120]: I0220 15:17:36.883251 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="1872e13f-a70c-47d2-8f9b-2a068aecec22" containerName="keystone-bootstrap" Feb 20 15:17:36.893101 master-0 kubenswrapper[28120]: I0220 15:17:36.893042 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:36.900717 master-0 kubenswrapper[28120]: I0220 15:17:36.900653 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 20 15:17:36.900956 master-0 kubenswrapper[28120]: I0220 15:17:36.900774 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7fb458c87b-4xkt8" podStartSLOduration=6.900743521 podStartE2EDuration="6.900743521s" podCreationTimestamp="2026-02-20 15:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:36.875439691 +0000 UTC m=+995.136233264" watchObservedRunningTime="2026-02-20 15:17:36.900743521 +0000 UTC m=+995.161537104" Feb 20 15:17:36.900956 master-0 kubenswrapper[28120]: I0220 15:17:36.900892 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 20 15:17:36.901076 master-0 kubenswrapper[28120]: I0220 15:17:36.901058 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 20 15:17:36.902835 master-0 kubenswrapper[28120]: I0220 15:17:36.902258 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 20 15:17:36.902835 master-0 kubenswrapper[28120]: I0220 15:17:36.902489 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-856c7f5b5-b8rb7"] Feb 20 15:17:36.902835 master-0 kubenswrapper[28120]: I0220 15:17:36.902542 28120 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"keystone-config-data" Feb 20 15:17:37.005702 master-0 kubenswrapper[28120]: I0220 15:17:37.005631 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-combined-ca-bundle\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.005702 master-0 kubenswrapper[28120]: I0220 15:17:37.005688 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-internal-tls-certs\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.005702 master-0 kubenswrapper[28120]: I0220 15:17:37.005710 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-config-data\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.006048 master-0 kubenswrapper[28120]: I0220 15:17:37.005778 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-fernet-keys\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.006048 master-0 kubenswrapper[28120]: I0220 15:17:37.005799 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhmgh\" (UniqueName: \"kubernetes.io/projected/157bbfc1-6155-4fcf-b305-12ad4424bdaa-kube-api-access-dhmgh\") pod 
\"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.006048 master-0 kubenswrapper[28120]: I0220 15:17:37.005837 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-scripts\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.006048 master-0 kubenswrapper[28120]: I0220 15:17:37.005856 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-credential-keys\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.006048 master-0 kubenswrapper[28120]: I0220 15:17:37.005912 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-public-tls-certs\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.110146 master-0 kubenswrapper[28120]: I0220 15:17:37.108147 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-scripts\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.110146 master-0 kubenswrapper[28120]: I0220 15:17:37.108323 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-credential-keys\") pod 
\"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.110146 master-0 kubenswrapper[28120]: I0220 15:17:37.108552 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-public-tls-certs\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.110146 master-0 kubenswrapper[28120]: I0220 15:17:37.108991 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-combined-ca-bundle\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.110146 master-0 kubenswrapper[28120]: I0220 15:17:37.109164 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-internal-tls-certs\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.110146 master-0 kubenswrapper[28120]: I0220 15:17:37.109194 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-config-data\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.110146 master-0 kubenswrapper[28120]: I0220 15:17:37.109280 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-fernet-keys\") pod \"keystone-856c7f5b5-b8rb7\" 
(UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.110146 master-0 kubenswrapper[28120]: I0220 15:17:37.109301 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhmgh\" (UniqueName: \"kubernetes.io/projected/157bbfc1-6155-4fcf-b305-12ad4424bdaa-kube-api-access-dhmgh\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.114956 master-0 kubenswrapper[28120]: I0220 15:17:37.112721 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-public-tls-certs\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.114956 master-0 kubenswrapper[28120]: I0220 15:17:37.112744 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-config-data\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.114956 master-0 kubenswrapper[28120]: I0220 15:17:37.112739 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-scripts\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.114956 master-0 kubenswrapper[28120]: I0220 15:17:37.113069 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-credential-keys\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " 
pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.114956 master-0 kubenswrapper[28120]: I0220 15:17:37.113174 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-combined-ca-bundle\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.116472 master-0 kubenswrapper[28120]: I0220 15:17:37.116429 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-fernet-keys\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.118561 master-0 kubenswrapper[28120]: I0220 15:17:37.118513 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157bbfc1-6155-4fcf-b305-12ad4424bdaa-internal-tls-certs\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.131657 master-0 kubenswrapper[28120]: I0220 15:17:37.131593 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhmgh\" (UniqueName: \"kubernetes.io/projected/157bbfc1-6155-4fcf-b305-12ad4424bdaa-kube-api-access-dhmgh\") pod \"keystone-856c7f5b5-b8rb7\" (UID: \"157bbfc1-6155-4fcf-b305-12ad4424bdaa\") " pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.333103 master-0 kubenswrapper[28120]: I0220 15:17:37.326988 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-856c7f5b5-b8rb7" Feb 20 15:17:37.848675 master-0 kubenswrapper[28120]: I0220 15:17:37.848323 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-4crzf" event={"ID":"e1911cb1-e2d4-4be7-93b8-43bc600e8386","Type":"ContainerStarted","Data":"919e5789fc57a597088dc917fd76f5e01e9a881355125b3026f487282c73de4e"} Feb 20 15:17:37.893674 master-0 kubenswrapper[28120]: I0220 15:17:37.893600 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-856c7f5b5-b8rb7"] Feb 20 15:17:37.903898 master-0 kubenswrapper[28120]: W0220 15:17:37.903832 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod157bbfc1_6155_4fcf_b305_12ad4424bdaa.slice/crio-75b9e6f4804483dc7eef96eb64231b5ee6a3eb58fd6e1aabb7cde310ac78a9c3 WatchSource:0}: Error finding container 75b9e6f4804483dc7eef96eb64231b5ee6a3eb58fd6e1aabb7cde310ac78a9c3: Status 404 returned error can't find the container with id 75b9e6f4804483dc7eef96eb64231b5ee6a3eb58fd6e1aabb7cde310ac78a9c3 Feb 20 15:17:37.957657 master-0 kubenswrapper[28120]: I0220 15:17:37.957126 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-4crzf" podStartSLOduration=13.259859474 podStartE2EDuration="18.957107139s" podCreationTimestamp="2026-02-20 15:17:19 +0000 UTC" firstStartedPulling="2026-02-20 15:17:29.846620768 +0000 UTC m=+988.107414321" lastFinishedPulling="2026-02-20 15:17:35.543868413 +0000 UTC m=+993.804661986" observedRunningTime="2026-02-20 15:17:37.942732861 +0000 UTC m=+996.203526424" watchObservedRunningTime="2026-02-20 15:17:37.957107139 +0000 UTC m=+996.217900702" Feb 20 15:17:38.082212 master-0 kubenswrapper[28120]: I0220 15:17:38.079869 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:17:38.866535 master-0 kubenswrapper[28120]: I0220 
15:17:38.866410 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-856c7f5b5-b8rb7" event={"ID":"157bbfc1-6155-4fcf-b305-12ad4424bdaa","Type":"ContainerStarted","Data":"442f572cefe0db4ed330eef1acdd706d47ee5f62064aae11d944eed0aef96297"}
Feb 20 15:17:38.866535 master-0 kubenswrapper[28120]: I0220 15:17:38.866479 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-856c7f5b5-b8rb7" event={"ID":"157bbfc1-6155-4fcf-b305-12ad4424bdaa","Type":"ContainerStarted","Data":"75b9e6f4804483dc7eef96eb64231b5ee6a3eb58fd6e1aabb7cde310ac78a9c3"}
Feb 20 15:17:38.867635 master-0 kubenswrapper[28120]: I0220 15:17:38.867517 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-856c7f5b5-b8rb7"
Feb 20 15:17:39.880022 master-0 kubenswrapper[28120]: I0220 15:17:39.879870 28120 generic.go:334] "Generic (PLEG): container finished" podID="395d8eca-6336-4d08-b510-0d5d5b95e114" containerID="c2e588188b062e8484fa6e357c9f0ccf6a2db0441d33faa4f33c8634ec8d42bc" exitCode=0
Feb 20 15:17:39.880022 master-0 kubenswrapper[28120]: I0220 15:17:39.879966 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-db-sync-qnpmn" event={"ID":"395d8eca-6336-4d08-b510-0d5d5b95e114","Type":"ContainerDied","Data":"c2e588188b062e8484fa6e357c9f0ccf6a2db0441d33faa4f33c8634ec8d42bc"}
Feb 20 15:17:39.904715 master-0 kubenswrapper[28120]: I0220 15:17:39.904636 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-856c7f5b5-b8rb7" podStartSLOduration=3.9046192079999997 podStartE2EDuration="3.904619208s" podCreationTimestamp="2026-02-20 15:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:38.903876466 +0000 UTC m=+997.164670069" watchObservedRunningTime="2026-02-20 15:17:39.904619208 +0000 UTC m=+998.165412771"
Feb 20 15:17:41.378747 master-0 kubenswrapper[28120]: I0220 15:17:41.378681 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-db-sync-qnpmn"
Feb 20 15:17:41.544269 master-0 kubenswrapper[28120]: I0220 15:17:41.544119 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-db-sync-config-data\") pod \"395d8eca-6336-4d08-b510-0d5d5b95e114\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") "
Feb 20 15:17:41.544475 master-0 kubenswrapper[28120]: I0220 15:17:41.544277 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-combined-ca-bundle\") pod \"395d8eca-6336-4d08-b510-0d5d5b95e114\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") "
Feb 20 15:17:41.544475 master-0 kubenswrapper[28120]: I0220 15:17:41.544432 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-config-data\") pod \"395d8eca-6336-4d08-b510-0d5d5b95e114\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") "
Feb 20 15:17:41.544569 master-0 kubenswrapper[28120]: I0220 15:17:41.544525 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4lb9\" (UniqueName: \"kubernetes.io/projected/395d8eca-6336-4d08-b510-0d5d5b95e114-kube-api-access-h4lb9\") pod \"395d8eca-6336-4d08-b510-0d5d5b95e114\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") "
Feb 20 15:17:41.544809 master-0 kubenswrapper[28120]: I0220 15:17:41.544743 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/395d8eca-6336-4d08-b510-0d5d5b95e114-etc-machine-id\") pod \"395d8eca-6336-4d08-b510-0d5d5b95e114\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") "
Feb 20 15:17:41.545021 master-0 kubenswrapper[28120]: I0220 15:17:41.544958 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/395d8eca-6336-4d08-b510-0d5d5b95e114-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "395d8eca-6336-4d08-b510-0d5d5b95e114" (UID: "395d8eca-6336-4d08-b510-0d5d5b95e114"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:17:41.545173 master-0 kubenswrapper[28120]: I0220 15:17:41.545127 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-scripts\") pod \"395d8eca-6336-4d08-b510-0d5d5b95e114\" (UID: \"395d8eca-6336-4d08-b510-0d5d5b95e114\") "
Feb 20 15:17:41.546085 master-0 kubenswrapper[28120]: I0220 15:17:41.546035 28120 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/395d8eca-6336-4d08-b510-0d5d5b95e114-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:41.548016 master-0 kubenswrapper[28120]: I0220 15:17:41.547914 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "395d8eca-6336-4d08-b510-0d5d5b95e114" (UID: "395d8eca-6336-4d08-b510-0d5d5b95e114"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:41.549061 master-0 kubenswrapper[28120]: I0220 15:17:41.549003 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395d8eca-6336-4d08-b510-0d5d5b95e114-kube-api-access-h4lb9" (OuterVolumeSpecName: "kube-api-access-h4lb9") pod "395d8eca-6336-4d08-b510-0d5d5b95e114" (UID: "395d8eca-6336-4d08-b510-0d5d5b95e114"). InnerVolumeSpecName "kube-api-access-h4lb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:17:41.550936 master-0 kubenswrapper[28120]: I0220 15:17:41.550875 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-scripts" (OuterVolumeSpecName: "scripts") pod "395d8eca-6336-4d08-b510-0d5d5b95e114" (UID: "395d8eca-6336-4d08-b510-0d5d5b95e114"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:41.595911 master-0 kubenswrapper[28120]: I0220 15:17:41.595835 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-config-data" (OuterVolumeSpecName: "config-data") pod "395d8eca-6336-4d08-b510-0d5d5b95e114" (UID: "395d8eca-6336-4d08-b510-0d5d5b95e114"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:41.599821 master-0 kubenswrapper[28120]: I0220 15:17:41.599768 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "395d8eca-6336-4d08-b510-0d5d5b95e114" (UID: "395d8eca-6336-4d08-b510-0d5d5b95e114"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:41.650225 master-0 kubenswrapper[28120]: I0220 15:17:41.650170 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:41.650225 master-0 kubenswrapper[28120]: I0220 15:17:41.650218 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:41.650225 master-0 kubenswrapper[28120]: I0220 15:17:41.650234 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4lb9\" (UniqueName: \"kubernetes.io/projected/395d8eca-6336-4d08-b510-0d5d5b95e114-kube-api-access-h4lb9\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:41.650559 master-0 kubenswrapper[28120]: I0220 15:17:41.650247 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:41.650559 master-0 kubenswrapper[28120]: I0220 15:17:41.650258 28120 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/395d8eca-6336-4d08-b510-0d5d5b95e114-db-sync-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:41.928552 master-0 kubenswrapper[28120]: I0220 15:17:41.928424 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-db-sync-qnpmn" event={"ID":"395d8eca-6336-4d08-b510-0d5d5b95e114","Type":"ContainerDied","Data":"59c0bda23f5f33c479ada018592e94d515c85f7785f93b41dcba438fdf92b249"}
Feb 20 15:17:41.928552 master-0 kubenswrapper[28120]: I0220 15:17:41.928495 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59c0bda23f5f33c479ada018592e94d515c85f7785f93b41dcba438fdf92b249"
Feb 20 15:17:41.928552 master-0 kubenswrapper[28120]: I0220 15:17:41.928517 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-db-sync-qnpmn"
Feb 20 15:17:42.247065 master-0 kubenswrapper[28120]: I0220 15:17:42.246085 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eea69-scheduler-0"]
Feb 20 15:17:42.248096 master-0 kubenswrapper[28120]: E0220 15:17:42.248046 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395d8eca-6336-4d08-b510-0d5d5b95e114" containerName="cinder-eea69-db-sync"
Feb 20 15:17:42.248400 master-0 kubenswrapper[28120]: I0220 15:17:42.248096 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="395d8eca-6336-4d08-b510-0d5d5b95e114" containerName="cinder-eea69-db-sync"
Feb 20 15:17:42.248809 master-0 kubenswrapper[28120]: I0220 15:17:42.248787 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="395d8eca-6336-4d08-b510-0d5d5b95e114" containerName="cinder-eea69-db-sync"
Feb 20 15:17:42.251917 master-0 kubenswrapper[28120]: I0220 15:17:42.251876 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.258561 master-0 kubenswrapper[28120]: I0220 15:17:42.254463 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-eea69-scripts"
Feb 20 15:17:42.258561 master-0 kubenswrapper[28120]: I0220 15:17:42.255228 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-eea69-scheduler-config-data"
Feb 20 15:17:42.258561 master-0 kubenswrapper[28120]: I0220 15:17:42.255315 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-eea69-config-data"
Feb 20 15:17:42.264852 master-0 kubenswrapper[28120]: I0220 15:17:42.264794 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eea69-volume-lvm-iscsi-0"]
Feb 20 15:17:42.365908 master-0 kubenswrapper[28120]: I0220 15:17:42.364985 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-scheduler-0"]
Feb 20 15:17:42.365908 master-0 kubenswrapper[28120]: I0220 15:17:42.365050 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-volume-lvm-iscsi-0"]
Feb 20 15:17:42.365908 master-0 kubenswrapper[28120]: I0220 15:17:42.365065 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dd74dd7c9-spm9x"]
Feb 20 15:17:42.365908 master-0 kubenswrapper[28120]: I0220 15:17:42.365153 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.370373 master-0 kubenswrapper[28120]: I0220 15:17:42.370298 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-eea69-volume-lvm-iscsi-config-data"
Feb 20 15:17:42.370775 master-0 kubenswrapper[28120]: I0220 15:17:42.370733 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-combined-ca-bundle\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.370861 master-0 kubenswrapper[28120]: I0220 15:17:42.370777 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/101c0fb0-2857-4ea7-98b0-2fdd08b39966-etc-machine-id\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.370861 master-0 kubenswrapper[28120]: I0220 15:17:42.370830 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-config-data\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.370969 master-0 kubenswrapper[28120]: I0220 15:17:42.370891 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-scripts\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.371229 master-0 kubenswrapper[28120]: I0220 15:17:42.371202 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczk5\" (UniqueName: \"kubernetes.io/projected/101c0fb0-2857-4ea7-98b0-2fdd08b39966-kube-api-access-xczk5\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.371322 master-0 kubenswrapper[28120]: I0220 15:17:42.371298 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-config-data-custom\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.375122 master-0 kubenswrapper[28120]: I0220 15:17:42.375087 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dd74dd7c9-spm9x"]
Feb 20 15:17:42.375288 master-0 kubenswrapper[28120]: I0220 15:17:42.375195 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x"
Feb 20 15:17:42.432495 master-0 kubenswrapper[28120]: I0220 15:17:42.432422 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eea69-backup-0"]
Feb 20 15:17:42.435302 master-0 kubenswrapper[28120]: I0220 15:17:42.435092 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.446524 master-0 kubenswrapper[28120]: I0220 15:17:42.446461 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-eea69-backup-config-data"
Feb 20 15:17:42.478161 master-0 kubenswrapper[28120]: I0220 15:17:42.478100 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-dev\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.478488 master-0 kubenswrapper[28120]: I0220 15:17:42.478221 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-combined-ca-bundle\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.478488 master-0 kubenswrapper[28120]: I0220 15:17:42.478271 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-scripts\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.478488 master-0 kubenswrapper[28120]: I0220 15:17:42.478294 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-config\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x"
Feb 20 15:17:42.478488 master-0 kubenswrapper[28120]: I0220 15:17:42.478326 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/101c0fb0-2857-4ea7-98b0-2fdd08b39966-etc-machine-id\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.478488 master-0 kubenswrapper[28120]: I0220 15:17:42.478347 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-combined-ca-bundle\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.478488 master-0 kubenswrapper[28120]: I0220 15:17:42.478372 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-run\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.478488 master-0 kubenswrapper[28120]: I0220 15:17:42.478400 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-config-data\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.478488 master-0 kubenswrapper[28120]: I0220 15:17:42.478423 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g2gh\" (UniqueName: \"kubernetes.io/projected/5531e029-f3da-4a66-8c34-1189e0ecad07-kube-api-access-7g2gh\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x"
Feb 20 15:17:42.478488 master-0 kubenswrapper[28120]: I0220 15:17:42.478448 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-machine-id\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.479096 master-0 kubenswrapper[28120]: I0220 15:17:42.478484 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-scripts\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.479096 master-0 kubenswrapper[28120]: I0220 15:17:42.478832 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-locks-brick\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.479096 master-0 kubenswrapper[28120]: I0220 15:17:42.478971 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-iscsi\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.479096 master-0 kubenswrapper[28120]: I0220 15:17:42.479008 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-config-data\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.479096 master-0 kubenswrapper[28120]: I0220 15:17:42.479056 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-dns-swift-storage-0\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x"
Feb 20 15:17:42.479288 master-0 kubenswrapper[28120]: I0220 15:17:42.479103 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-ovsdbserver-sb\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x"
Feb 20 15:17:42.479288 master-0 kubenswrapper[28120]: I0220 15:17:42.479156 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhqs6\" (UniqueName: \"kubernetes.io/projected/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-kube-api-access-vhqs6\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.479288 master-0 kubenswrapper[28120]: I0220 15:17:42.479225 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-lib-cinder\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.480883 master-0 kubenswrapper[28120]: I0220 15:17:42.480839 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-locks-cinder\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.481139 master-0 kubenswrapper[28120]: I0220 15:17:42.481115 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-sys\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.481205 master-0 kubenswrapper[28120]: I0220 15:17:42.481192 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xczk5\" (UniqueName: \"kubernetes.io/projected/101c0fb0-2857-4ea7-98b0-2fdd08b39966-kube-api-access-xczk5\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.481387 master-0 kubenswrapper[28120]: I0220 15:17:42.481249 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-ovsdbserver-nb\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x"
Feb 20 15:17:42.481387 master-0 kubenswrapper[28120]: I0220 15:17:42.481313 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-config-data-custom\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.481387 master-0 kubenswrapper[28120]: I0220 15:17:42.481378 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-config-data-custom\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.481524 master-0 kubenswrapper[28120]: I0220 15:17:42.481446 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-dns-svc\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x"
Feb 20 15:17:42.481524 master-0 kubenswrapper[28120]: I0220 15:17:42.481488 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-lib-modules\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.481585 master-0 kubenswrapper[28120]: I0220 15:17:42.481533 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-nvme\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.483832 master-0 kubenswrapper[28120]: I0220 15:17:42.483789 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-config-data\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.485225 master-0 kubenswrapper[28120]: I0220 15:17:42.485180 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-backup-0"]
Feb 20 15:17:42.486395 master-0 kubenswrapper[28120]: I0220 15:17:42.486329 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-scripts\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.488296 master-0 kubenswrapper[28120]: I0220 15:17:42.488232 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/101c0fb0-2857-4ea7-98b0-2fdd08b39966-etc-machine-id\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.488454 master-0 kubenswrapper[28120]: I0220 15:17:42.488426 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-config-data-custom\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.492764 master-0 kubenswrapper[28120]: I0220 15:17:42.492716 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-combined-ca-bundle\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.511355 master-0 kubenswrapper[28120]: I0220 15:17:42.511223 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xczk5\" (UniqueName: \"kubernetes.io/projected/101c0fb0-2857-4ea7-98b0-2fdd08b39966-kube-api-access-xczk5\") pod \"cinder-eea69-scheduler-0\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:42.590043 master-0 kubenswrapper[28120]: I0220 15:17:42.589988 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-sys\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.590243 master-0 kubenswrapper[28120]: I0220 15:17:42.590074 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-ovsdbserver-nb\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x"
Feb 20 15:17:42.590243 master-0 kubenswrapper[28120]: I0220 15:17:42.590115 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-combined-ca-bundle\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.590243 master-0 kubenswrapper[28120]: I0220 15:17:42.590158 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-run\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.590243 master-0 kubenswrapper[28120]: I0220 15:17:42.590181 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-nvme\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.590243 master-0 kubenswrapper[28120]: I0220 15:17:42.590210 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-config-data-custom\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.590243 master-0 kubenswrapper[28120]: I0220 15:17:42.590238 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-locks-brick\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.590420 master-0 kubenswrapper[28120]: I0220 15:17:42.590266 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-config-data-custom\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.590420 master-0 kubenswrapper[28120]: I0220 15:17:42.590292 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-scripts\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.590420 master-0 kubenswrapper[28120]: I0220 15:17:42.590314 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-dns-svc\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x"
Feb 20 15:17:42.590420 master-0 kubenswrapper[28120]: I0220 15:17:42.590335 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-lib-modules\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.590420 master-0 kubenswrapper[28120]: I0220 15:17:42.590362 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-nvme\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.590420 master-0 kubenswrapper[28120]: I0220 15:17:42.590390 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-combined-ca-bundle\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.590420 master-0 kubenswrapper[28120]: I0220 15:17:42.590408 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-machine-id\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.590624 master-0 kubenswrapper[28120]: I0220 15:17:42.590428 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-dev\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.590624 master-0 kubenswrapper[28120]: I0220 15:17:42.590448 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-dev\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.590624 master-0 kubenswrapper[28120]: I0220 15:17:42.590476 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-scripts\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.590624 master-0 kubenswrapper[28120]: I0220 15:17:42.590496 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-config\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x"
Feb 20 15:17:42.590624 master-0 kubenswrapper[28120]: I0220 15:17:42.590530 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-run\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.590624 master-0 kubenswrapper[28120]: I0220 15:17:42.590563 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-machine-id\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.590624 master-0 kubenswrapper[28120]: I0220 15:17:42.590583 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g2gh\" (UniqueName: \"kubernetes.io/projected/5531e029-f3da-4a66-8c34-1189e0ecad07-kube-api-access-7g2gh\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x"
Feb 20 15:17:42.590624 master-0 kubenswrapper[28120]: I0220 15:17:42.590628 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-locks-brick\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.590906 master-0 kubenswrapper[28120]: I0220 15:17:42.590647 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-iscsi\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.590906 master-0 kubenswrapper[28120]: I0220 15:17:42.590678 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-config-data\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:42.590906 master-0 kubenswrapper[28120]: I0220 15:17:42.590706 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-sys\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.590906 master-0 kubenswrapper[28120]: I0220 15:17:42.590731 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-dns-swift-storage-0\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x"
Feb 20 15:17:42.590906 master-0 kubenswrapper[28120]: I0220 15:17:42.590756 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v58mh\" (UniqueName: \"kubernetes.io/projected/6cccfb03-002e-4e09-b630-501bfe258139-kube-api-access-v58mh\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.590906 master-0 kubenswrapper[28120]: I0220 15:17:42.590785 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-locks-cinder\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.590906 master-0 kubenswrapper[28120]: I0220 15:17:42.590803 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-lib-cinder\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:42.590906 master-0 kubenswrapper[28120]: I0220 15:17:42.590823 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName:
\"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-ovsdbserver-sb\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" Feb 20 15:17:42.590906 master-0 kubenswrapper[28120]: I0220 15:17:42.590863 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhqs6\" (UniqueName: \"kubernetes.io/projected/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-kube-api-access-vhqs6\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.590906 master-0 kubenswrapper[28120]: I0220 15:17:42.590892 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-lib-modules\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.591840 master-0 kubenswrapper[28120]: I0220 15:17:42.591011 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-config-data\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.591840 master-0 kubenswrapper[28120]: I0220 15:17:42.591040 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-lib-cinder\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.591840 master-0 kubenswrapper[28120]: I0220 15:17:42.591063 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-locks-cinder\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.591840 master-0 kubenswrapper[28120]: I0220 15:17:42.591086 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-iscsi\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.591840 master-0 kubenswrapper[28120]: I0220 15:17:42.591334 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-sys\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.592283 master-0 kubenswrapper[28120]: I0220 15:17:42.592262 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-run\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.592765 master-0 kubenswrapper[28120]: I0220 15:17:42.592718 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-ovsdbserver-nb\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" Feb 20 15:17:42.594104 master-0 kubenswrapper[28120]: I0220 15:17:42.594057 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-machine-id\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.595067 master-0 kubenswrapper[28120]: I0220 15:17:42.594702 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-locks-brick\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.595067 master-0 kubenswrapper[28120]: I0220 15:17:42.594736 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-iscsi\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.599445 master-0 kubenswrapper[28120]: I0220 15:17:42.599397 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-config-data\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.603027 master-0 kubenswrapper[28120]: I0220 15:17:42.602909 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-dns-swift-storage-0\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" Feb 20 15:17:42.603132 master-0 kubenswrapper[28120]: I0220 15:17:42.603011 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-ovsdbserver-sb\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" Feb 20 15:17:42.603209 master-0 kubenswrapper[28120]: I0220 15:17:42.603177 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-lib-cinder\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.603252 master-0 kubenswrapper[28120]: I0220 15:17:42.603217 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-locks-cinder\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.603291 master-0 kubenswrapper[28120]: I0220 15:17:42.603238 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-dev\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.604062 master-0 kubenswrapper[28120]: I0220 15:17:42.604007 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-config\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" Feb 20 15:17:42.604062 master-0 kubenswrapper[28120]: I0220 15:17:42.604012 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-nvme\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.604161 master-0 kubenswrapper[28120]: I0220 15:17:42.604059 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-lib-modules\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.604196 master-0 kubenswrapper[28120]: I0220 15:17:42.604170 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-dns-svc\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" Feb 20 15:17:42.604624 master-0 kubenswrapper[28120]: I0220 15:17:42.604593 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-config-data-custom\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.608216 master-0 kubenswrapper[28120]: I0220 15:17:42.605777 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-scripts\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.608905 master-0 kubenswrapper[28120]: I0220 15:17:42.608798 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-combined-ca-bundle\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.644725 master-0 kubenswrapper[28120]: I0220 15:17:42.644676 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhqs6\" (UniqueName: \"kubernetes.io/projected/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-kube-api-access-vhqs6\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.653084 master-0 kubenswrapper[28120]: I0220 15:17:42.652739 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g2gh\" (UniqueName: \"kubernetes.io/projected/5531e029-f3da-4a66-8c34-1189e0ecad07-kube-api-access-7g2gh\") pod \"dnsmasq-dns-dd74dd7c9-spm9x\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" Feb 20 15:17:42.653302 master-0 kubenswrapper[28120]: I0220 15:17:42.653196 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eea69-api-0"] Feb 20 15:17:42.664409 master-0 kubenswrapper[28120]: I0220 15:17:42.663159 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.666698 master-0 kubenswrapper[28120]: I0220 15:17:42.666667 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-eea69-api-config-data" Feb 20 15:17:42.675864 master-0 kubenswrapper[28120]: I0220 15:17:42.675806 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-api-0"] Feb 20 15:17:42.684703 master-0 kubenswrapper[28120]: I0220 15:17:42.684617 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:42.693262 master-0 kubenswrapper[28120]: I0220 15:17:42.693208 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-lib-modules\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.693627 master-0 kubenswrapper[28120]: I0220 15:17:42.693577 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-lib-modules\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.693756 master-0 kubenswrapper[28120]: I0220 15:17:42.693620 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-config-data\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.694560 master-0 kubenswrapper[28120]: I0220 15:17:42.694325 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-iscsi\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.694857 master-0 kubenswrapper[28120]: I0220 15:17:42.694483 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-iscsi\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.695347 master-0 
kubenswrapper[28120]: I0220 15:17:42.695291 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-combined-ca-bundle\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.695559 master-0 kubenswrapper[28120]: I0220 15:17:42.695545 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-run\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.695727 master-0 kubenswrapper[28120]: I0220 15:17:42.695704 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-nvme\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.696037 master-0 kubenswrapper[28120]: I0220 15:17:42.695646 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-run\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.696037 master-0 kubenswrapper[28120]: I0220 15:17:42.695888 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-nvme\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.696250 master-0 kubenswrapper[28120]: I0220 15:17:42.696234 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-locks-brick\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.696347 master-0 kubenswrapper[28120]: I0220 15:17:42.696333 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-config-data-custom\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.696427 master-0 kubenswrapper[28120]: I0220 15:17:42.696415 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-scripts\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.696621 master-0 kubenswrapper[28120]: I0220 15:17:42.696604 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-machine-id\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.696728 master-0 kubenswrapper[28120]: I0220 15:17:42.696714 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-dev\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.697829 master-0 kubenswrapper[28120]: I0220 15:17:42.697555 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-locks-brick\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.697829 master-0 kubenswrapper[28120]: I0220 15:17:42.697729 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-dev\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.698387 master-0 kubenswrapper[28120]: I0220 15:17:42.698369 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-machine-id\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.698533 master-0 kubenswrapper[28120]: I0220 15:17:42.698404 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-sys\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.698667 master-0 kubenswrapper[28120]: I0220 15:17:42.698652 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v58mh\" (UniqueName: \"kubernetes.io/projected/6cccfb03-002e-4e09-b630-501bfe258139-kube-api-access-v58mh\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.698804 master-0 kubenswrapper[28120]: I0220 15:17:42.698789 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-locks-cinder\") pod 
\"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.698974 master-0 kubenswrapper[28120]: I0220 15:17:42.698896 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-lib-cinder\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.699623 master-0 kubenswrapper[28120]: I0220 15:17:42.699533 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-combined-ca-bundle\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.701027 master-0 kubenswrapper[28120]: I0220 15:17:42.698423 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-sys\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.701027 master-0 kubenswrapper[28120]: I0220 15:17:42.700757 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-config-data\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.701912 master-0 kubenswrapper[28120]: I0220 15:17:42.701875 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-locks-cinder\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " 
pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.701985 master-0 kubenswrapper[28120]: I0220 15:17:42.701935 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-lib-cinder\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.711570 master-0 kubenswrapper[28120]: I0220 15:17:42.710628 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:42.711570 master-0 kubenswrapper[28120]: I0220 15:17:42.711220 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-config-data-custom\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.711570 master-0 kubenswrapper[28120]: I0220 15:17:42.711515 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-scripts\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.720458 master-0 kubenswrapper[28120]: I0220 15:17:42.720415 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v58mh\" (UniqueName: \"kubernetes.io/projected/6cccfb03-002e-4e09-b630-501bfe258139-kube-api-access-v58mh\") pod \"cinder-eea69-backup-0\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.749225 master-0 kubenswrapper[28120]: I0220 15:17:42.748598 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" Feb 20 15:17:42.802826 master-0 kubenswrapper[28120]: I0220 15:17:42.802771 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-config-data\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.802936 master-0 kubenswrapper[28120]: I0220 15:17:42.802854 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-combined-ca-bundle\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.802936 master-0 kubenswrapper[28120]: I0220 15:17:42.802879 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-scripts\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.803196 master-0 kubenswrapper[28120]: I0220 15:17:42.803159 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w45fw\" (UniqueName: \"kubernetes.io/projected/a7cfdf4a-f4f0-49aa-836b-ab2865026873-kube-api-access-w45fw\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.803278 master-0 kubenswrapper[28120]: I0220 15:17:42.803258 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7cfdf4a-f4f0-49aa-836b-ab2865026873-logs\") pod \"cinder-eea69-api-0\" (UID: 
\"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.803507 master-0 kubenswrapper[28120]: I0220 15:17:42.803469 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a7cfdf4a-f4f0-49aa-836b-ab2865026873-etc-machine-id\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.803618 master-0 kubenswrapper[28120]: I0220 15:17:42.803602 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-config-data-custom\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.893823 master-0 kubenswrapper[28120]: I0220 15:17:42.893572 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:42.905855 master-0 kubenswrapper[28120]: I0220 15:17:42.905811 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-config-data\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.905993 master-0 kubenswrapper[28120]: I0220 15:17:42.905879 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-combined-ca-bundle\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.905993 master-0 kubenswrapper[28120]: I0220 15:17:42.905907 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-scripts\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.906060 master-0 kubenswrapper[28120]: I0220 15:17:42.905989 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w45fw\" (UniqueName: \"kubernetes.io/projected/a7cfdf4a-f4f0-49aa-836b-ab2865026873-kube-api-access-w45fw\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.906060 master-0 kubenswrapper[28120]: I0220 15:17:42.906030 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7cfdf4a-f4f0-49aa-836b-ab2865026873-logs\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.906129 master-0 
kubenswrapper[28120]: I0220 15:17:42.906080 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a7cfdf4a-f4f0-49aa-836b-ab2865026873-etc-machine-id\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.906129 master-0 kubenswrapper[28120]: I0220 15:17:42.906115 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-config-data-custom\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.909913 master-0 kubenswrapper[28120]: I0220 15:17:42.909883 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-config-data-custom\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.914401 master-0 kubenswrapper[28120]: I0220 15:17:42.914108 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7cfdf4a-f4f0-49aa-836b-ab2865026873-logs\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.914401 master-0 kubenswrapper[28120]: I0220 15:17:42.914414 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a7cfdf4a-f4f0-49aa-836b-ab2865026873-etc-machine-id\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.916064 master-0 kubenswrapper[28120]: I0220 15:17:42.915321 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-config-data\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.917403 master-0 kubenswrapper[28120]: I0220 15:17:42.917332 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-combined-ca-bundle\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.918161 master-0 kubenswrapper[28120]: I0220 15:17:42.918124 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-scripts\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:42.935766 master-0 kubenswrapper[28120]: I0220 15:17:42.933550 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w45fw\" (UniqueName: \"kubernetes.io/projected/a7cfdf4a-f4f0-49aa-836b-ab2865026873-kube-api-access-w45fw\") pod \"cinder-eea69-api-0\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:43.064179 master-0 kubenswrapper[28120]: I0220 15:17:43.064033 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eea69-api-0" Feb 20 15:17:43.240836 master-0 kubenswrapper[28120]: I0220 15:17:43.240667 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-scheduler-0"] Feb 20 15:17:43.358847 master-0 kubenswrapper[28120]: W0220 15:17:43.353627 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5531e029_f3da_4a66_8c34_1189e0ecad07.slice/crio-fc0a58727cd4842d89400e37f72eebe5dfcb084f95431ae535dc0d9feaa206a6 WatchSource:0}: Error finding container fc0a58727cd4842d89400e37f72eebe5dfcb084f95431ae535dc0d9feaa206a6: Status 404 returned error can't find the container with id fc0a58727cd4842d89400e37f72eebe5dfcb084f95431ae535dc0d9feaa206a6 Feb 20 15:17:43.358847 master-0 kubenswrapper[28120]: I0220 15:17:43.354762 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dd74dd7c9-spm9x"] Feb 20 15:17:43.367210 master-0 kubenswrapper[28120]: I0220 15:17:43.367155 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-volume-lvm-iscsi-0"] Feb 20 15:17:43.624436 master-0 kubenswrapper[28120]: I0220 15:17:43.624364 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-backup-0"] Feb 20 15:17:43.655120 master-0 kubenswrapper[28120]: I0220 15:17:43.654945 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-api-0"] Feb 20 15:17:43.655207 master-0 kubenswrapper[28120]: W0220 15:17:43.655173 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7cfdf4a_f4f0_49aa_836b_ab2865026873.slice/crio-2d7d155866f7d86c1e1b8ae367a7dc89621bdf47cf671593ab250fa80ab87e4f WatchSource:0}: Error finding container 2d7d155866f7d86c1e1b8ae367a7dc89621bdf47cf671593ab250fa80ab87e4f: Status 404 returned error can't find the container with id 
2d7d155866f7d86c1e1b8ae367a7dc89621bdf47cf671593ab250fa80ab87e4f Feb 20 15:17:43.958685 master-0 kubenswrapper[28120]: I0220 15:17:43.958618 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" event={"ID":"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f","Type":"ContainerStarted","Data":"a8722c078ad03c029798e957cc12c18c8f080e2503180ba8114b9727b6a45450"} Feb 20 15:17:43.960491 master-0 kubenswrapper[28120]: I0220 15:17:43.960450 28120 generic.go:334] "Generic (PLEG): container finished" podID="5531e029-f3da-4a66-8c34-1189e0ecad07" containerID="1822ae92c14462d245b0ac6b21f04833c0ef528478e56d1ba73585c63abaa65f" exitCode=0 Feb 20 15:17:43.960559 master-0 kubenswrapper[28120]: I0220 15:17:43.960499 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" event={"ID":"5531e029-f3da-4a66-8c34-1189e0ecad07","Type":"ContainerDied","Data":"1822ae92c14462d245b0ac6b21f04833c0ef528478e56d1ba73585c63abaa65f"} Feb 20 15:17:43.960609 master-0 kubenswrapper[28120]: I0220 15:17:43.960558 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" event={"ID":"5531e029-f3da-4a66-8c34-1189e0ecad07","Type":"ContainerStarted","Data":"fc0a58727cd4842d89400e37f72eebe5dfcb084f95431ae535dc0d9feaa206a6"} Feb 20 15:17:43.966213 master-0 kubenswrapper[28120]: I0220 15:17:43.966147 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-scheduler-0" event={"ID":"101c0fb0-2857-4ea7-98b0-2fdd08b39966","Type":"ContainerStarted","Data":"fc1045efeae51519cbce875abbf40dd60e2d6bdfdc4ef5b097e0150930228682"} Feb 20 15:17:43.968059 master-0 kubenswrapper[28120]: I0220 15:17:43.968030 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-backup-0" event={"ID":"6cccfb03-002e-4e09-b630-501bfe258139","Type":"ContainerStarted","Data":"f71839a100f03b4edbb3bf9cc08f406c1af0768463e65dafacd63ad850cbe0f3"} Feb 20 15:17:43.969382 master-0 
kubenswrapper[28120]: I0220 15:17:43.969328 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-api-0" event={"ID":"a7cfdf4a-f4f0-49aa-836b-ab2865026873","Type":"ContainerStarted","Data":"2d7d155866f7d86c1e1b8ae367a7dc89621bdf47cf671593ab250fa80ab87e4f"} Feb 20 15:17:44.985652 master-0 kubenswrapper[28120]: I0220 15:17:44.985600 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-api-0" event={"ID":"a7cfdf4a-f4f0-49aa-836b-ab2865026873","Type":"ContainerStarted","Data":"cab228cc9a52716948f26793326d2f4e880d482b70511c32526e6011cd9834da"} Feb 20 15:17:44.992845 master-0 kubenswrapper[28120]: I0220 15:17:44.992265 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" event={"ID":"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f","Type":"ContainerStarted","Data":"839e01f27fa816464a6dce414026ba4dff21b3cd5a7cc75432e24844bd954570"} Feb 20 15:17:44.996404 master-0 kubenswrapper[28120]: I0220 15:17:44.996350 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" event={"ID":"5531e029-f3da-4a66-8c34-1189e0ecad07","Type":"ContainerStarted","Data":"0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87"} Feb 20 15:17:44.996904 master-0 kubenswrapper[28120]: I0220 15:17:44.996875 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" Feb 20 15:17:44.999540 master-0 kubenswrapper[28120]: I0220 15:17:44.999475 28120 generic.go:334] "Generic (PLEG): container finished" podID="556b936a-2d3e-4b78-b7d1-a7020060be7e" containerID="595479c3e8f2aafb02a35789c9d2e81b12722779ae6b63e5331d9b53df76367f" exitCode=0 Feb 20 15:17:44.999634 master-0 kubenswrapper[28120]: I0220 15:17:44.999554 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rfmtx" 
event={"ID":"556b936a-2d3e-4b78-b7d1-a7020060be7e","Type":"ContainerDied","Data":"595479c3e8f2aafb02a35789c9d2e81b12722779ae6b63e5331d9b53df76367f"} Feb 20 15:17:45.022814 master-0 kubenswrapper[28120]: I0220 15:17:45.022719 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" podStartSLOduration=3.022701348 podStartE2EDuration="3.022701348s" podCreationTimestamp="2026-02-20 15:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:45.013262463 +0000 UTC m=+1003.274056026" watchObservedRunningTime="2026-02-20 15:17:45.022701348 +0000 UTC m=+1003.283494911" Feb 20 15:17:45.707078 master-0 kubenswrapper[28120]: I0220 15:17:45.707018 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-eea69-api-0"] Feb 20 15:17:46.119894 master-0 kubenswrapper[28120]: I0220 15:17:46.109852 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-eea69-api-0" Feb 20 15:17:46.119894 master-0 kubenswrapper[28120]: I0220 15:17:46.109896 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-api-0" event={"ID":"a7cfdf4a-f4f0-49aa-836b-ab2865026873","Type":"ContainerStarted","Data":"75c84887d72906085d89af6d4e0c6e3285970261918e0037878de1c32594312b"} Feb 20 15:17:46.120586 master-0 kubenswrapper[28120]: I0220 15:17:46.120493 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" event={"ID":"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f","Type":"ContainerStarted","Data":"30da80d308a13eb09dcdfe16d7c4a73151b9f84e490b85b75f8bbf6b90da2687"} Feb 20 15:17:46.121678 master-0 kubenswrapper[28120]: I0220 15:17:46.121617 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-eea69-api-0" podStartSLOduration=4.121597767 podStartE2EDuration="4.121597767s" 
podCreationTimestamp="2026-02-20 15:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:46.115359441 +0000 UTC m=+1004.376153004" watchObservedRunningTime="2026-02-20 15:17:46.121597767 +0000 UTC m=+1004.382391330" Feb 20 15:17:46.140181 master-0 kubenswrapper[28120]: I0220 15:17:46.139314 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-scheduler-0" event={"ID":"101c0fb0-2857-4ea7-98b0-2fdd08b39966","Type":"ContainerStarted","Data":"c8dd508c01b7d112c698a8a3eec28c78d4458edcdcc852653467a3fc36dccfe4"} Feb 20 15:17:46.144119 master-0 kubenswrapper[28120]: I0220 15:17:46.144061 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-backup-0" event={"ID":"6cccfb03-002e-4e09-b630-501bfe258139","Type":"ContainerStarted","Data":"89403f7f6619936bec0bf0b11cc89ad1ad6b86855249706936fdbce192a9bb3a"} Feb 20 15:17:46.144305 master-0 kubenswrapper[28120]: I0220 15:17:46.144131 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-backup-0" event={"ID":"6cccfb03-002e-4e09-b630-501bfe258139","Type":"ContainerStarted","Data":"4f5a4da6d27c99e36867ddeb79ffe2e3637197208020d30465ce5adb68736955"} Feb 20 15:17:46.172696 master-0 kubenswrapper[28120]: I0220 15:17:46.172552 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" podStartSLOduration=3.141394587 podStartE2EDuration="4.172531226s" podCreationTimestamp="2026-02-20 15:17:42 +0000 UTC" firstStartedPulling="2026-02-20 15:17:43.37412415 +0000 UTC m=+1001.634917713" lastFinishedPulling="2026-02-20 15:17:44.405260799 +0000 UTC m=+1002.666054352" observedRunningTime="2026-02-20 15:17:46.16063827 +0000 UTC m=+1004.421431833" watchObservedRunningTime="2026-02-20 15:17:46.172531226 +0000 UTC m=+1004.433324789" Feb 20 15:17:46.212547 master-0 kubenswrapper[28120]: I0220 
15:17:46.212038 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-eea69-backup-0" podStartSLOduration=3.082024537 podStartE2EDuration="4.21201313s" podCreationTimestamp="2026-02-20 15:17:42 +0000 UTC" firstStartedPulling="2026-02-20 15:17:43.648781326 +0000 UTC m=+1001.909574889" lastFinishedPulling="2026-02-20 15:17:44.778769919 +0000 UTC m=+1003.039563482" observedRunningTime="2026-02-20 15:17:46.199717184 +0000 UTC m=+1004.460510747" watchObservedRunningTime="2026-02-20 15:17:46.21201313 +0000 UTC m=+1004.472806693" Feb 20 15:17:46.691157 master-0 kubenswrapper[28120]: I0220 15:17:46.691104 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:46.890590 master-0 kubenswrapper[28120]: I0220 15:17:46.889460 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9qkz\" (UniqueName: \"kubernetes.io/projected/556b936a-2d3e-4b78-b7d1-a7020060be7e-kube-api-access-v9qkz\") pod \"556b936a-2d3e-4b78-b7d1-a7020060be7e\" (UID: \"556b936a-2d3e-4b78-b7d1-a7020060be7e\") " Feb 20 15:17:46.890590 master-0 kubenswrapper[28120]: I0220 15:17:46.889609 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/556b936a-2d3e-4b78-b7d1-a7020060be7e-config\") pod \"556b936a-2d3e-4b78-b7d1-a7020060be7e\" (UID: \"556b936a-2d3e-4b78-b7d1-a7020060be7e\") " Feb 20 15:17:46.890590 master-0 kubenswrapper[28120]: I0220 15:17:46.889710 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556b936a-2d3e-4b78-b7d1-a7020060be7e-combined-ca-bundle\") pod \"556b936a-2d3e-4b78-b7d1-a7020060be7e\" (UID: \"556b936a-2d3e-4b78-b7d1-a7020060be7e\") " Feb 20 15:17:46.907009 master-0 kubenswrapper[28120]: I0220 15:17:46.904149 28120 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/556b936a-2d3e-4b78-b7d1-a7020060be7e-kube-api-access-v9qkz" (OuterVolumeSpecName: "kube-api-access-v9qkz") pod "556b936a-2d3e-4b78-b7d1-a7020060be7e" (UID: "556b936a-2d3e-4b78-b7d1-a7020060be7e"). InnerVolumeSpecName "kube-api-access-v9qkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:46.928845 master-0 kubenswrapper[28120]: I0220 15:17:46.928769 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/556b936a-2d3e-4b78-b7d1-a7020060be7e-config" (OuterVolumeSpecName: "config") pod "556b936a-2d3e-4b78-b7d1-a7020060be7e" (UID: "556b936a-2d3e-4b78-b7d1-a7020060be7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:46.994386 master-0 kubenswrapper[28120]: I0220 15:17:46.994328 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/556b936a-2d3e-4b78-b7d1-a7020060be7e-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:46.994386 master-0 kubenswrapper[28120]: I0220 15:17:46.994366 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9qkz\" (UniqueName: \"kubernetes.io/projected/556b936a-2d3e-4b78-b7d1-a7020060be7e-kube-api-access-v9qkz\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:47.009197 master-0 kubenswrapper[28120]: I0220 15:17:47.002116 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/556b936a-2d3e-4b78-b7d1-a7020060be7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "556b936a-2d3e-4b78-b7d1-a7020060be7e" (UID: "556b936a-2d3e-4b78-b7d1-a7020060be7e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:47.097918 master-0 kubenswrapper[28120]: I0220 15:17:47.097839 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556b936a-2d3e-4b78-b7d1-a7020060be7e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:47.228334 master-0 kubenswrapper[28120]: I0220 15:17:47.228276 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rfmtx" event={"ID":"556b936a-2d3e-4b78-b7d1-a7020060be7e","Type":"ContainerDied","Data":"c548a9bd3eaf9517340a4ff97d4cabe198c64078d3b5587ee85b16f6c5d5b516"} Feb 20 15:17:47.228334 master-0 kubenswrapper[28120]: I0220 15:17:47.228330 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c548a9bd3eaf9517340a4ff97d4cabe198c64078d3b5587ee85b16f6c5d5b516" Feb 20 15:17:47.234295 master-0 kubenswrapper[28120]: I0220 15:17:47.228398 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rfmtx" Feb 20 15:17:47.258318 master-0 kubenswrapper[28120]: I0220 15:17:47.256889 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-eea69-api-0" podUID="a7cfdf4a-f4f0-49aa-836b-ab2865026873" containerName="cinder-eea69-api-log" containerID="cri-o://cab228cc9a52716948f26793326d2f4e880d482b70511c32526e6011cd9834da" gracePeriod=30 Feb 20 15:17:47.258318 master-0 kubenswrapper[28120]: I0220 15:17:47.257732 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-scheduler-0" event={"ID":"101c0fb0-2857-4ea7-98b0-2fdd08b39966","Type":"ContainerStarted","Data":"5daafc80726bb950b11554a078c88cb62946a87b63e3cd743d00987939962692"} Feb 20 15:17:47.258318 master-0 kubenswrapper[28120]: I0220 15:17:47.258011 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-eea69-api-0" podUID="a7cfdf4a-f4f0-49aa-836b-ab2865026873" containerName="cinder-api" containerID="cri-o://75c84887d72906085d89af6d4e0c6e3285970261918e0037878de1c32594312b" gracePeriod=30 Feb 20 15:17:47.305990 master-0 kubenswrapper[28120]: I0220 15:17:47.305047 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dd74dd7c9-spm9x"] Feb 20 15:17:47.305990 master-0 kubenswrapper[28120]: I0220 15:17:47.305379 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" podUID="5531e029-f3da-4a66-8c34-1189e0ecad07" containerName="dnsmasq-dns" containerID="cri-o://0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87" gracePeriod=10 Feb 20 15:17:47.342070 master-0 kubenswrapper[28120]: I0220 15:17:47.336162 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c54fb858c-bpkz9"] Feb 20 15:17:47.342070 master-0 kubenswrapper[28120]: E0220 15:17:47.336680 28120 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="556b936a-2d3e-4b78-b7d1-a7020060be7e" containerName="neutron-db-sync" Feb 20 15:17:47.342070 master-0 kubenswrapper[28120]: I0220 15:17:47.336694 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="556b936a-2d3e-4b78-b7d1-a7020060be7e" containerName="neutron-db-sync" Feb 20 15:17:47.342070 master-0 kubenswrapper[28120]: I0220 15:17:47.336932 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="556b936a-2d3e-4b78-b7d1-a7020060be7e" containerName="neutron-db-sync" Feb 20 15:17:47.342070 master-0 kubenswrapper[28120]: I0220 15:17:47.339247 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.357849 master-0 kubenswrapper[28120]: I0220 15:17:47.349362 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c54fb858c-bpkz9"] Feb 20 15:17:47.357849 master-0 kubenswrapper[28120]: I0220 15:17:47.351796 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-eea69-scheduler-0" podStartSLOduration=4.216924433 podStartE2EDuration="5.351774377s" podCreationTimestamp="2026-02-20 15:17:42 +0000 UTC" firstStartedPulling="2026-02-20 15:17:43.271106833 +0000 UTC m=+1001.531900396" lastFinishedPulling="2026-02-20 15:17:44.405956767 +0000 UTC m=+1002.666750340" observedRunningTime="2026-02-20 15:17:47.336578719 +0000 UTC m=+1005.597372282" watchObservedRunningTime="2026-02-20 15:17:47.351774377 +0000 UTC m=+1005.612567940" Feb 20 15:17:47.390958 master-0 kubenswrapper[28120]: I0220 15:17:47.390007 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-79855fb6c6-bhshx"] Feb 20 15:17:47.394524 master-0 kubenswrapper[28120]: I0220 15:17:47.392052 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.394524 master-0 kubenswrapper[28120]: I0220 15:17:47.393479 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 20 15:17:47.397542 master-0 kubenswrapper[28120]: I0220 15:17:47.395794 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79855fb6c6-bhshx"] Feb 20 15:17:47.400662 master-0 kubenswrapper[28120]: I0220 15:17:47.398942 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 20 15:17:47.400662 master-0 kubenswrapper[28120]: I0220 15:17:47.399246 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 20 15:17:47.522431 master-0 kubenswrapper[28120]: I0220 15:17:47.522372 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcjhn\" (UniqueName: \"kubernetes.io/projected/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-kube-api-access-gcjhn\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.522640 master-0 kubenswrapper[28120]: I0220 15:17:47.522443 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-dns-swift-storage-0\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.522640 master-0 kubenswrapper[28120]: I0220 15:17:47.522486 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-combined-ca-bundle\") pod \"neutron-79855fb6c6-bhshx\" (UID: 
\"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.522640 master-0 kubenswrapper[28120]: I0220 15:17:47.522509 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-config\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.522640 master-0 kubenswrapper[28120]: I0220 15:17:47.522528 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-ovsdbserver-nb\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.523217 master-0 kubenswrapper[28120]: I0220 15:17:47.522872 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-ovndb-tls-certs\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.523404 master-0 kubenswrapper[28120]: I0220 15:17:47.523381 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-ovsdbserver-sb\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.523704 master-0 kubenswrapper[28120]: I0220 15:17:47.523689 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj27p\" (UniqueName: 
\"kubernetes.io/projected/619993c5-d801-4e79-bf70-d7a94e307239-kube-api-access-sj27p\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.523848 master-0 kubenswrapper[28120]: I0220 15:17:47.523835 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-httpd-config\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.523941 master-0 kubenswrapper[28120]: I0220 15:17:47.523916 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-dns-svc\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.524080 master-0 kubenswrapper[28120]: I0220 15:17:47.524067 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-config\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.626146 master-0 kubenswrapper[28120]: I0220 15:17:47.626054 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcjhn\" (UniqueName: \"kubernetes.io/projected/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-kube-api-access-gcjhn\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.626289 master-0 kubenswrapper[28120]: I0220 15:17:47.626203 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-dns-swift-storage-0\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.626289 master-0 kubenswrapper[28120]: I0220 15:17:47.626280 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-combined-ca-bundle\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.626414 master-0 kubenswrapper[28120]: I0220 15:17:47.626339 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-config\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.626414 master-0 kubenswrapper[28120]: I0220 15:17:47.626366 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-ovsdbserver-nb\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.626414 master-0 kubenswrapper[28120]: I0220 15:17:47.626399 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-ovndb-tls-certs\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.626556 master-0 kubenswrapper[28120]: I0220 15:17:47.626441 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-ovsdbserver-sb\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.626609 master-0 kubenswrapper[28120]: I0220 15:17:47.626560 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj27p\" (UniqueName: \"kubernetes.io/projected/619993c5-d801-4e79-bf70-d7a94e307239-kube-api-access-sj27p\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.626658 master-0 kubenswrapper[28120]: I0220 15:17:47.626610 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-httpd-config\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.626658 master-0 kubenswrapper[28120]: I0220 15:17:47.626640 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-dns-svc\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.626741 master-0 kubenswrapper[28120]: I0220 15:17:47.626680 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-config\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.627524 master-0 kubenswrapper[28120]: I0220 15:17:47.627458 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-dns-swift-storage-0\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.627659 master-0 kubenswrapper[28120]: I0220 15:17:47.627614 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-config\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.629512 master-0 kubenswrapper[28120]: I0220 15:17:47.629481 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-dns-svc\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.629675 master-0 kubenswrapper[28120]: I0220 15:17:47.629612 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-ovsdbserver-sb\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.630206 master-0 kubenswrapper[28120]: I0220 15:17:47.630063 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-ovsdbserver-nb\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.631663 master-0 kubenswrapper[28120]: I0220 15:17:47.631593 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-config\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.632409 master-0 kubenswrapper[28120]: I0220 15:17:47.632387 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-httpd-config\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.633743 master-0 kubenswrapper[28120]: I0220 15:17:47.633513 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-ovndb-tls-certs\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.634246 master-0 kubenswrapper[28120]: I0220 15:17:47.634201 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-combined-ca-bundle\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.649890 master-0 kubenswrapper[28120]: I0220 15:17:47.647983 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcjhn\" (UniqueName: \"kubernetes.io/projected/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-kube-api-access-gcjhn\") pod \"dnsmasq-dns-c54fb858c-bpkz9\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.656831 master-0 kubenswrapper[28120]: I0220 15:17:47.656790 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj27p\" (UniqueName: 
\"kubernetes.io/projected/619993c5-d801-4e79-bf70-d7a94e307239-kube-api-access-sj27p\") pod \"neutron-79855fb6c6-bhshx\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.685028 master-0 kubenswrapper[28120]: I0220 15:17:47.684835 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:47.727981 master-0 kubenswrapper[28120]: I0220 15:17:47.727851 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:47.757469 master-0 kubenswrapper[28120]: I0220 15:17:47.757153 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:47.771979 master-0 kubenswrapper[28120]: I0220 15:17:47.768560 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:47.897068 master-0 kubenswrapper[28120]: I0220 15:17:47.897025 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:48.035659 master-0 kubenswrapper[28120]: I0220 15:17:48.035613 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" Feb 20 15:17:48.052632 master-0 kubenswrapper[28120]: I0220 15:17:48.051447 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-ovsdbserver-sb\") pod \"5531e029-f3da-4a66-8c34-1189e0ecad07\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " Feb 20 15:17:48.052632 master-0 kubenswrapper[28120]: I0220 15:17:48.051560 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-dns-swift-storage-0\") pod \"5531e029-f3da-4a66-8c34-1189e0ecad07\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " Feb 20 15:17:48.052632 master-0 kubenswrapper[28120]: I0220 15:17:48.051653 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-ovsdbserver-nb\") pod \"5531e029-f3da-4a66-8c34-1189e0ecad07\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " Feb 20 15:17:48.052632 master-0 kubenswrapper[28120]: I0220 15:17:48.051694 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-config\") pod \"5531e029-f3da-4a66-8c34-1189e0ecad07\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " Feb 20 15:17:48.052632 master-0 kubenswrapper[28120]: I0220 15:17:48.051801 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g2gh\" (UniqueName: \"kubernetes.io/projected/5531e029-f3da-4a66-8c34-1189e0ecad07-kube-api-access-7g2gh\") pod \"5531e029-f3da-4a66-8c34-1189e0ecad07\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " Feb 20 15:17:48.062957 master-0 kubenswrapper[28120]: I0220 15:17:48.059136 28120 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5531e029-f3da-4a66-8c34-1189e0ecad07-kube-api-access-7g2gh" (OuterVolumeSpecName: "kube-api-access-7g2gh") pod "5531e029-f3da-4a66-8c34-1189e0ecad07" (UID: "5531e029-f3da-4a66-8c34-1189e0ecad07"). InnerVolumeSpecName "kube-api-access-7g2gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:48.128522 master-0 kubenswrapper[28120]: I0220 15:17:48.128458 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5531e029-f3da-4a66-8c34-1189e0ecad07" (UID: "5531e029-f3da-4a66-8c34-1189e0ecad07"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:48.131556 master-0 kubenswrapper[28120]: I0220 15:17:48.131501 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-config" (OuterVolumeSpecName: "config") pod "5531e029-f3da-4a66-8c34-1189e0ecad07" (UID: "5531e029-f3da-4a66-8c34-1189e0ecad07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:48.149368 master-0 kubenswrapper[28120]: I0220 15:17:48.149211 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5531e029-f3da-4a66-8c34-1189e0ecad07" (UID: "5531e029-f3da-4a66-8c34-1189e0ecad07"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:48.155179 master-0 kubenswrapper[28120]: I0220 15:17:48.153599 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-dns-svc\") pod \"5531e029-f3da-4a66-8c34-1189e0ecad07\" (UID: \"5531e029-f3da-4a66-8c34-1189e0ecad07\") " Feb 20 15:17:48.155179 master-0 kubenswrapper[28120]: I0220 15:17:48.154382 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g2gh\" (UniqueName: \"kubernetes.io/projected/5531e029-f3da-4a66-8c34-1189e0ecad07-kube-api-access-7g2gh\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.155179 master-0 kubenswrapper[28120]: I0220 15:17:48.154397 28120 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.155179 master-0 kubenswrapper[28120]: I0220 15:17:48.154408 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.155179 master-0 kubenswrapper[28120]: I0220 15:17:48.154416 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.158494 master-0 kubenswrapper[28120]: I0220 15:17:48.158443 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5531e029-f3da-4a66-8c34-1189e0ecad07" (UID: "5531e029-f3da-4a66-8c34-1189e0ecad07"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:48.256238 master-0 kubenswrapper[28120]: I0220 15:17:48.233544 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5531e029-f3da-4a66-8c34-1189e0ecad07" (UID: "5531e029-f3da-4a66-8c34-1189e0ecad07"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:48.257166 master-0 kubenswrapper[28120]: I0220 15:17:48.256472 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.257166 master-0 kubenswrapper[28120]: I0220 15:17:48.256514 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5531e029-f3da-4a66-8c34-1189e0ecad07-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.284186 master-0 kubenswrapper[28120]: I0220 15:17:48.284118 28120 generic.go:334] "Generic (PLEG): container finished" podID="5531e029-f3da-4a66-8c34-1189e0ecad07" containerID="0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87" exitCode=0 Feb 20 15:17:48.284379 master-0 kubenswrapper[28120]: I0220 15:17:48.284198 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" event={"ID":"5531e029-f3da-4a66-8c34-1189e0ecad07","Type":"ContainerDied","Data":"0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87"} Feb 20 15:17:48.284379 master-0 kubenswrapper[28120]: I0220 15:17:48.284228 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" event={"ID":"5531e029-f3da-4a66-8c34-1189e0ecad07","Type":"ContainerDied","Data":"fc0a58727cd4842d89400e37f72eebe5dfcb084f95431ae535dc0d9feaa206a6"} Feb 20 15:17:48.284379 
master-0 kubenswrapper[28120]: I0220 15:17:48.284244 28120 scope.go:117] "RemoveContainer" containerID="0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87" Feb 20 15:17:48.284479 master-0 kubenswrapper[28120]: I0220 15:17:48.284399 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd74dd7c9-spm9x" Feb 20 15:17:48.304145 master-0 kubenswrapper[28120]: I0220 15:17:48.304103 28120 generic.go:334] "Generic (PLEG): container finished" podID="a7cfdf4a-f4f0-49aa-836b-ab2865026873" containerID="75c84887d72906085d89af6d4e0c6e3285970261918e0037878de1c32594312b" exitCode=0 Feb 20 15:17:48.304145 master-0 kubenswrapper[28120]: I0220 15:17:48.304139 28120 generic.go:334] "Generic (PLEG): container finished" podID="a7cfdf4a-f4f0-49aa-836b-ab2865026873" containerID="cab228cc9a52716948f26793326d2f4e880d482b70511c32526e6011cd9834da" exitCode=143 Feb 20 15:17:48.304680 master-0 kubenswrapper[28120]: I0220 15:17:48.304631 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-api-0" event={"ID":"a7cfdf4a-f4f0-49aa-836b-ab2865026873","Type":"ContainerDied","Data":"75c84887d72906085d89af6d4e0c6e3285970261918e0037878de1c32594312b"} Feb 20 15:17:48.304748 master-0 kubenswrapper[28120]: I0220 15:17:48.304719 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-api-0" event={"ID":"a7cfdf4a-f4f0-49aa-836b-ab2865026873","Type":"ContainerDied","Data":"cab228cc9a52716948f26793326d2f4e880d482b70511c32526e6011cd9834da"} Feb 20 15:17:48.388162 master-0 kubenswrapper[28120]: I0220 15:17:48.384221 28120 scope.go:117] "RemoveContainer" containerID="1822ae92c14462d245b0ac6b21f04833c0ef528478e56d1ba73585c63abaa65f" Feb 20 15:17:48.388162 master-0 kubenswrapper[28120]: I0220 15:17:48.385221 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eea69-api-0" Feb 20 15:17:48.419012 master-0 kubenswrapper[28120]: I0220 15:17:48.417777 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dd74dd7c9-spm9x"] Feb 20 15:17:48.426129 master-0 kubenswrapper[28120]: I0220 15:17:48.425266 28120 scope.go:117] "RemoveContainer" containerID="0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87" Feb 20 15:17:48.436097 master-0 kubenswrapper[28120]: E0220 15:17:48.435170 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87\": container with ID starting with 0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87 not found: ID does not exist" containerID="0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87" Feb 20 15:17:48.436097 master-0 kubenswrapper[28120]: I0220 15:17:48.435226 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87"} err="failed to get container status \"0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87\": rpc error: code = NotFound desc = could not find container \"0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87\": container with ID starting with 0d2b6ebea9fbcefe070661455a2348a5d56a200c2e5380c81855192df7e20a87 not found: ID does not exist" Feb 20 15:17:48.436097 master-0 kubenswrapper[28120]: I0220 15:17:48.435255 28120 scope.go:117] "RemoveContainer" containerID="1822ae92c14462d245b0ac6b21f04833c0ef528478e56d1ba73585c63abaa65f" Feb 20 15:17:48.438889 master-0 kubenswrapper[28120]: I0220 15:17:48.438838 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dd74dd7c9-spm9x"] Feb 20 15:17:48.446001 master-0 kubenswrapper[28120]: E0220 15:17:48.444083 28120 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1822ae92c14462d245b0ac6b21f04833c0ef528478e56d1ba73585c63abaa65f\": container with ID starting with 1822ae92c14462d245b0ac6b21f04833c0ef528478e56d1ba73585c63abaa65f not found: ID does not exist" containerID="1822ae92c14462d245b0ac6b21f04833c0ef528478e56d1ba73585c63abaa65f" Feb 20 15:17:48.446001 master-0 kubenswrapper[28120]: I0220 15:17:48.444141 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1822ae92c14462d245b0ac6b21f04833c0ef528478e56d1ba73585c63abaa65f"} err="failed to get container status \"1822ae92c14462d245b0ac6b21f04833c0ef528478e56d1ba73585c63abaa65f\": rpc error: code = NotFound desc = could not find container \"1822ae92c14462d245b0ac6b21f04833c0ef528478e56d1ba73585c63abaa65f\": container with ID starting with 1822ae92c14462d245b0ac6b21f04833c0ef528478e56d1ba73585c63abaa65f not found: ID does not exist" Feb 20 15:17:48.469947 master-0 kubenswrapper[28120]: I0220 15:17:48.463636 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-scripts\") pod \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " Feb 20 15:17:48.469947 master-0 kubenswrapper[28120]: I0220 15:17:48.463817 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-config-data-custom\") pod \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " Feb 20 15:17:48.469947 master-0 kubenswrapper[28120]: I0220 15:17:48.463846 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-config-data\") pod 
\"a7cfdf4a-f4f0-49aa-836b-ab2865026873\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " Feb 20 15:17:48.469947 master-0 kubenswrapper[28120]: I0220 15:17:48.463964 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w45fw\" (UniqueName: \"kubernetes.io/projected/a7cfdf4a-f4f0-49aa-836b-ab2865026873-kube-api-access-w45fw\") pod \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " Feb 20 15:17:48.469947 master-0 kubenswrapper[28120]: I0220 15:17:48.464179 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a7cfdf4a-f4f0-49aa-836b-ab2865026873-etc-machine-id\") pod \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " Feb 20 15:17:48.469947 master-0 kubenswrapper[28120]: I0220 15:17:48.464565 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7cfdf4a-f4f0-49aa-836b-ab2865026873-logs\") pod \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " Feb 20 15:17:48.469947 master-0 kubenswrapper[28120]: I0220 15:17:48.464602 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-combined-ca-bundle\") pod \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\" (UID: \"a7cfdf4a-f4f0-49aa-836b-ab2865026873\") " Feb 20 15:17:48.470398 master-0 kubenswrapper[28120]: I0220 15:17:48.470359 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7cfdf4a-f4f0-49aa-836b-ab2865026873-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a7cfdf4a-f4f0-49aa-836b-ab2865026873" (UID: "a7cfdf4a-f4f0-49aa-836b-ab2865026873"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:48.476066 master-0 kubenswrapper[28120]: I0220 15:17:48.470683 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7cfdf4a-f4f0-49aa-836b-ab2865026873-logs" (OuterVolumeSpecName: "logs") pod "a7cfdf4a-f4f0-49aa-836b-ab2865026873" (UID: "a7cfdf4a-f4f0-49aa-836b-ab2865026873"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:17:48.476066 master-0 kubenswrapper[28120]: I0220 15:17:48.470744 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7cfdf4a-f4f0-49aa-836b-ab2865026873-kube-api-access-w45fw" (OuterVolumeSpecName: "kube-api-access-w45fw") pod "a7cfdf4a-f4f0-49aa-836b-ab2865026873" (UID: "a7cfdf4a-f4f0-49aa-836b-ab2865026873"). InnerVolumeSpecName "kube-api-access-w45fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:48.476066 master-0 kubenswrapper[28120]: I0220 15:17:48.475497 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w45fw\" (UniqueName: \"kubernetes.io/projected/a7cfdf4a-f4f0-49aa-836b-ab2865026873-kube-api-access-w45fw\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.476066 master-0 kubenswrapper[28120]: I0220 15:17:48.475532 28120 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a7cfdf4a-f4f0-49aa-836b-ab2865026873-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.476066 master-0 kubenswrapper[28120]: I0220 15:17:48.475544 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7cfdf4a-f4f0-49aa-836b-ab2865026873-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.494639 master-0 kubenswrapper[28120]: I0220 15:17:48.477366 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-scripts" (OuterVolumeSpecName: "scripts") pod "a7cfdf4a-f4f0-49aa-836b-ab2865026873" (UID: "a7cfdf4a-f4f0-49aa-836b-ab2865026873"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:48.512444 master-0 kubenswrapper[28120]: I0220 15:17:48.512381 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a7cfdf4a-f4f0-49aa-836b-ab2865026873" (UID: "a7cfdf4a-f4f0-49aa-836b-ab2865026873"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:48.558811 master-0 kubenswrapper[28120]: I0220 15:17:48.557512 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7cfdf4a-f4f0-49aa-836b-ab2865026873" (UID: "a7cfdf4a-f4f0-49aa-836b-ab2865026873"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:48.575649 master-0 kubenswrapper[28120]: I0220 15:17:48.575605 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79855fb6c6-bhshx"] Feb 20 15:17:48.576230 master-0 kubenswrapper[28120]: I0220 15:17:48.576178 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-config-data" (OuterVolumeSpecName: "config-data") pod "a7cfdf4a-f4f0-49aa-836b-ab2865026873" (UID: "a7cfdf4a-f4f0-49aa-836b-ab2865026873"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:48.577426 master-0 kubenswrapper[28120]: I0220 15:17:48.577407 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.577510 master-0 kubenswrapper[28120]: I0220 15:17:48.577499 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.577572 master-0 kubenswrapper[28120]: I0220 15:17:48.577562 28120 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.577648 master-0 kubenswrapper[28120]: I0220 15:17:48.577638 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7cfdf4a-f4f0-49aa-836b-ab2865026873-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:48.628081 master-0 kubenswrapper[28120]: I0220 15:17:48.625836 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c54fb858c-bpkz9"] Feb 20 15:17:49.315368 master-0 kubenswrapper[28120]: I0220 15:17:49.315316 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-api-0" event={"ID":"a7cfdf4a-f4f0-49aa-836b-ab2865026873","Type":"ContainerDied","Data":"2d7d155866f7d86c1e1b8ae367a7dc89621bdf47cf671593ab250fa80ab87e4f"} Feb 20 15:17:49.315368 master-0 kubenswrapper[28120]: I0220 15:17:49.315372 28120 scope.go:117] "RemoveContainer" containerID="75c84887d72906085d89af6d4e0c6e3285970261918e0037878de1c32594312b" Feb 20 15:17:49.315992 master-0 kubenswrapper[28120]: I0220 15:17:49.315376 28120 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.317600 master-0 kubenswrapper[28120]: I0220 15:17:49.317575 28120 generic.go:334] "Generic (PLEG): container finished" podID="fa98495d-36f2-4d3b-ad8a-139b1d11a4df" containerID="e6f32a45a3d25836ab37e8f3290cecea8876a39d99ad6f7689d8d741d28473b4" exitCode=0 Feb 20 15:17:49.317689 master-0 kubenswrapper[28120]: I0220 15:17:49.317629 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" event={"ID":"fa98495d-36f2-4d3b-ad8a-139b1d11a4df","Type":"ContainerDied","Data":"e6f32a45a3d25836ab37e8f3290cecea8876a39d99ad6f7689d8d741d28473b4"} Feb 20 15:17:49.317689 master-0 kubenswrapper[28120]: I0220 15:17:49.317646 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" event={"ID":"fa98495d-36f2-4d3b-ad8a-139b1d11a4df","Type":"ContainerStarted","Data":"77215333360e9a286ad82a4148c4a9522a7e6928a417b61852d6bf5bfc80b3ac"} Feb 20 15:17:49.318940 master-0 kubenswrapper[28120]: I0220 15:17:49.318902 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79855fb6c6-bhshx" event={"ID":"619993c5-d801-4e79-bf70-d7a94e307239","Type":"ContainerStarted","Data":"6454b6a7258cdfe184e33c928247e68c106e172685eb33be40aa80c26db4617e"} Feb 20 15:17:49.319014 master-0 kubenswrapper[28120]: I0220 15:17:49.318941 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79855fb6c6-bhshx" event={"ID":"619993c5-d801-4e79-bf70-d7a94e307239","Type":"ContainerStarted","Data":"80d3e7c2fbedda1a0f1237fbcb29ccf7960d1509734b46a0827c9b43d4405715"} Feb 20 15:17:49.319014 master-0 kubenswrapper[28120]: I0220 15:17:49.318952 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79855fb6c6-bhshx" event={"ID":"619993c5-d801-4e79-bf70-d7a94e307239","Type":"ContainerStarted","Data":"c2cb37a87f61003975dc046876921754ad67955cfd3a7110a8bbceb23e5b323d"} Feb 20 15:17:49.319111 master-0 
kubenswrapper[28120]: I0220 15:17:49.319053 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:17:49.351189 master-0 kubenswrapper[28120]: I0220 15:17:49.348185 28120 scope.go:117] "RemoveContainer" containerID="cab228cc9a52716948f26793326d2f4e880d482b70511c32526e6011cd9834da" Feb 20 15:17:49.402062 master-0 kubenswrapper[28120]: I0220 15:17:49.395694 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-79855fb6c6-bhshx" podStartSLOduration=2.395669138 podStartE2EDuration="2.395669138s" podCreationTimestamp="2026-02-20 15:17:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:49.383912325 +0000 UTC m=+1007.644705888" watchObservedRunningTime="2026-02-20 15:17:49.395669138 +0000 UTC m=+1007.656462721" Feb 20 15:17:49.464062 master-0 kubenswrapper[28120]: I0220 15:17:49.463998 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-eea69-api-0"] Feb 20 15:17:49.482151 master-0 kubenswrapper[28120]: I0220 15:17:49.482080 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-eea69-api-0"] Feb 20 15:17:49.492032 master-0 kubenswrapper[28120]: I0220 15:17:49.491984 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eea69-api-0"] Feb 20 15:17:49.492517 master-0 kubenswrapper[28120]: E0220 15:17:49.492494 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7cfdf4a-f4f0-49aa-836b-ab2865026873" containerName="cinder-api" Feb 20 15:17:49.492517 master-0 kubenswrapper[28120]: I0220 15:17:49.492512 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7cfdf4a-f4f0-49aa-836b-ab2865026873" containerName="cinder-api" Feb 20 15:17:49.492624 master-0 kubenswrapper[28120]: E0220 15:17:49.492524 28120 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5531e029-f3da-4a66-8c34-1189e0ecad07" containerName="dnsmasq-dns" Feb 20 15:17:49.492624 master-0 kubenswrapper[28120]: I0220 15:17:49.492530 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5531e029-f3da-4a66-8c34-1189e0ecad07" containerName="dnsmasq-dns" Feb 20 15:17:49.492624 master-0 kubenswrapper[28120]: E0220 15:17:49.492585 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5531e029-f3da-4a66-8c34-1189e0ecad07" containerName="init" Feb 20 15:17:49.492624 master-0 kubenswrapper[28120]: I0220 15:17:49.492593 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5531e029-f3da-4a66-8c34-1189e0ecad07" containerName="init" Feb 20 15:17:49.492624 master-0 kubenswrapper[28120]: E0220 15:17:49.492613 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7cfdf4a-f4f0-49aa-836b-ab2865026873" containerName="cinder-eea69-api-log" Feb 20 15:17:49.492624 master-0 kubenswrapper[28120]: I0220 15:17:49.492620 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7cfdf4a-f4f0-49aa-836b-ab2865026873" containerName="cinder-eea69-api-log" Feb 20 15:17:49.492826 master-0 kubenswrapper[28120]: I0220 15:17:49.492807 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7cfdf4a-f4f0-49aa-836b-ab2865026873" containerName="cinder-api" Feb 20 15:17:49.492872 master-0 kubenswrapper[28120]: I0220 15:17:49.492843 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="5531e029-f3da-4a66-8c34-1189e0ecad07" containerName="dnsmasq-dns" Feb 20 15:17:49.492872 master-0 kubenswrapper[28120]: I0220 15:17:49.492859 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7cfdf4a-f4f0-49aa-836b-ab2865026873" containerName="cinder-eea69-api-log" Feb 20 15:17:49.496595 master-0 kubenswrapper[28120]: I0220 15:17:49.496071 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.501200 master-0 kubenswrapper[28120]: I0220 15:17:49.500029 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-scripts\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.501200 master-0 kubenswrapper[28120]: I0220 15:17:49.500072 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5df2fc98-95ed-424c-81be-ca2f479332bb-etc-machine-id\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.501200 master-0 kubenswrapper[28120]: I0220 15:17:49.500105 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-config-data-custom\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.501200 master-0 kubenswrapper[28120]: I0220 15:17:49.500124 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-internal-tls-certs\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.501200 master-0 kubenswrapper[28120]: I0220 15:17:49.500148 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-combined-ca-bundle\") pod \"cinder-eea69-api-0\" (UID: 
\"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.501200 master-0 kubenswrapper[28120]: I0220 15:17:49.500171 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4xds\" (UniqueName: \"kubernetes.io/projected/5df2fc98-95ed-424c-81be-ca2f479332bb-kube-api-access-l4xds\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.501200 master-0 kubenswrapper[28120]: I0220 15:17:49.500244 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-public-tls-certs\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.501200 master-0 kubenswrapper[28120]: I0220 15:17:49.500262 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-config-data\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.501200 master-0 kubenswrapper[28120]: I0220 15:17:49.500307 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df2fc98-95ed-424c-81be-ca2f479332bb-logs\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.505509 master-0 kubenswrapper[28120]: I0220 15:17:49.504791 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-api-0"] Feb 20 15:17:49.506472 master-0 kubenswrapper[28120]: I0220 15:17:49.506442 28120 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-cinder-internal-svc" Feb 20 15:17:49.510133 master-0 kubenswrapper[28120]: I0220 15:17:49.507059 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 20 15:17:49.510133 master-0 kubenswrapper[28120]: I0220 15:17:49.507902 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-eea69-api-config-data" Feb 20 15:17:49.602678 master-0 kubenswrapper[28120]: I0220 15:17:49.602085 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df2fc98-95ed-424c-81be-ca2f479332bb-logs\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.602678 master-0 kubenswrapper[28120]: I0220 15:17:49.602161 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-scripts\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.602678 master-0 kubenswrapper[28120]: I0220 15:17:49.602196 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5df2fc98-95ed-424c-81be-ca2f479332bb-etc-machine-id\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.602678 master-0 kubenswrapper[28120]: I0220 15:17:49.602231 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-config-data-custom\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.602678 master-0 kubenswrapper[28120]: I0220 15:17:49.602248 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-internal-tls-certs\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.602678 master-0 kubenswrapper[28120]: I0220 15:17:49.602283 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-combined-ca-bundle\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.602678 master-0 kubenswrapper[28120]: I0220 15:17:49.602305 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4xds\" (UniqueName: \"kubernetes.io/projected/5df2fc98-95ed-424c-81be-ca2f479332bb-kube-api-access-l4xds\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.602678 master-0 kubenswrapper[28120]: I0220 15:17:49.602388 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-public-tls-certs\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.602678 master-0 kubenswrapper[28120]: I0220 15:17:49.602408 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-config-data\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.610951 master-0 kubenswrapper[28120]: I0220 15:17:49.607173 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5df2fc98-95ed-424c-81be-ca2f479332bb-etc-machine-id\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.610951 master-0 kubenswrapper[28120]: I0220 15:17:49.610688 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-scripts\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.612994 master-0 kubenswrapper[28120]: I0220 15:17:49.611231 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5df2fc98-95ed-424c-81be-ca2f479332bb-logs\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.612994 master-0 kubenswrapper[28120]: I0220 15:17:49.611374 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-config-data\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.614825 master-0 kubenswrapper[28120]: I0220 15:17:49.614778 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-config-data-custom\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.615948 master-0 kubenswrapper[28120]: I0220 15:17:49.615885 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-combined-ca-bundle\") pod \"cinder-eea69-api-0\" (UID: 
\"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.622157 master-0 kubenswrapper[28120]: I0220 15:17:49.620964 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-public-tls-certs\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.623051 master-0 kubenswrapper[28120]: I0220 15:17:49.622960 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5df2fc98-95ed-424c-81be-ca2f479332bb-internal-tls-certs\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.656977 master-0 kubenswrapper[28120]: I0220 15:17:49.656113 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4xds\" (UniqueName: \"kubernetes.io/projected/5df2fc98-95ed-424c-81be-ca2f479332bb-kube-api-access-l4xds\") pod \"cinder-eea69-api-0\" (UID: \"5df2fc98-95ed-424c-81be-ca2f479332bb\") " pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.664046 master-0 kubenswrapper[28120]: I0220 15:17:49.663991 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9fd7b4d69-vstx7"] Feb 20 15:17:49.666286 master-0 kubenswrapper[28120]: I0220 15:17:49.665953 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.668138 master-0 kubenswrapper[28120]: I0220 15:17:49.668100 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 20 15:17:49.668415 master-0 kubenswrapper[28120]: I0220 15:17:49.668389 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 20 15:17:49.688805 master-0 kubenswrapper[28120]: I0220 15:17:49.688741 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9fd7b4d69-vstx7"] Feb 20 15:17:49.806942 master-0 kubenswrapper[28120]: I0220 15:17:49.806873 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-combined-ca-bundle\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.807165 master-0 kubenswrapper[28120]: I0220 15:17:49.806965 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-internal-tls-certs\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.807165 master-0 kubenswrapper[28120]: I0220 15:17:49.806989 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-ovndb-tls-certs\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.807165 master-0 kubenswrapper[28120]: I0220 15:17:49.807011 28120 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-config\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.807289 master-0 kubenswrapper[28120]: I0220 15:17:49.807202 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-httpd-config\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.807363 master-0 kubenswrapper[28120]: I0220 15:17:49.807322 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqc5n\" (UniqueName: \"kubernetes.io/projected/e45745ff-1548-4241-a152-def6fa24ac77-kube-api-access-kqc5n\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.807431 master-0 kubenswrapper[28120]: I0220 15:17:49.807408 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-public-tls-certs\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.833403 master-0 kubenswrapper[28120]: I0220 15:17:49.833297 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eea69-api-0" Feb 20 15:17:49.912518 master-0 kubenswrapper[28120]: I0220 15:17:49.910425 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-combined-ca-bundle\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.912518 master-0 kubenswrapper[28120]: I0220 15:17:49.910516 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-internal-tls-certs\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.912518 master-0 kubenswrapper[28120]: I0220 15:17:49.910552 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-ovndb-tls-certs\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.912518 master-0 kubenswrapper[28120]: I0220 15:17:49.910581 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-config\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.912518 master-0 kubenswrapper[28120]: I0220 15:17:49.910614 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-httpd-config\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " 
pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.912518 master-0 kubenswrapper[28120]: I0220 15:17:49.910654 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqc5n\" (UniqueName: \"kubernetes.io/projected/e45745ff-1548-4241-a152-def6fa24ac77-kube-api-access-kqc5n\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.912518 master-0 kubenswrapper[28120]: I0220 15:17:49.910723 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-public-tls-certs\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.917149 master-0 kubenswrapper[28120]: I0220 15:17:49.915759 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-httpd-config\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.917149 master-0 kubenswrapper[28120]: I0220 15:17:49.915924 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-combined-ca-bundle\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.927316 master-0 kubenswrapper[28120]: I0220 15:17:49.919250 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-internal-tls-certs\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" 
Feb 20 15:17:49.927316 master-0 kubenswrapper[28120]: I0220 15:17:49.919374 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-config\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.927316 master-0 kubenswrapper[28120]: I0220 15:17:49.924510 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-public-tls-certs\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.927316 master-0 kubenswrapper[28120]: I0220 15:17:49.925052 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45745ff-1548-4241-a152-def6fa24ac77-ovndb-tls-certs\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:49.960220 master-0 kubenswrapper[28120]: I0220 15:17:49.959672 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqc5n\" (UniqueName: \"kubernetes.io/projected/e45745ff-1548-4241-a152-def6fa24ac77-kube-api-access-kqc5n\") pod \"neutron-9fd7b4d69-vstx7\" (UID: \"e45745ff-1548-4241-a152-def6fa24ac77\") " pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:50.060730 master-0 kubenswrapper[28120]: I0220 15:17:50.060621 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:50.079354 master-0 kubenswrapper[28120]: I0220 15:17:50.079307 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5531e029-f3da-4a66-8c34-1189e0ecad07" path="/var/lib/kubelet/pods/5531e029-f3da-4a66-8c34-1189e0ecad07/volumes" Feb 20 15:17:50.080045 master-0 kubenswrapper[28120]: I0220 15:17:50.080018 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7cfdf4a-f4f0-49aa-836b-ab2865026873" path="/var/lib/kubelet/pods/a7cfdf4a-f4f0-49aa-836b-ab2865026873/volumes" Feb 20 15:17:50.313991 master-0 kubenswrapper[28120]: W0220 15:17:50.313912 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5df2fc98_95ed_424c_81be_ca2f479332bb.slice/crio-40943320faf98d942af4c6fd07001f77eb23453b6c94a8136777b0d4e820d26d WatchSource:0}: Error finding container 40943320faf98d942af4c6fd07001f77eb23453b6c94a8136777b0d4e820d26d: Status 404 returned error can't find the container with id 40943320faf98d942af4c6fd07001f77eb23453b6c94a8136777b0d4e820d26d Feb 20 15:17:50.320860 master-0 kubenswrapper[28120]: I0220 15:17:50.320799 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-api-0"] Feb 20 15:17:50.338042 master-0 kubenswrapper[28120]: I0220 15:17:50.337098 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-api-0" event={"ID":"5df2fc98-95ed-424c-81be-ca2f479332bb","Type":"ContainerStarted","Data":"40943320faf98d942af4c6fd07001f77eb23453b6c94a8136777b0d4e820d26d"} Feb 20 15:17:50.343912 master-0 kubenswrapper[28120]: I0220 15:17:50.343847 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" event={"ID":"fa98495d-36f2-4d3b-ad8a-139b1d11a4df","Type":"ContainerStarted","Data":"8e54cb78c719e825a618d3ebdccf42c1ca9861421fbd8b4dadf610d3b87fdfe3"} Feb 20 15:17:50.344144 master-0 
kubenswrapper[28120]: I0220 15:17:50.344000 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:50.379711 master-0 kubenswrapper[28120]: I0220 15:17:50.379621 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" podStartSLOduration=3.379603581 podStartE2EDuration="3.379603581s" podCreationTimestamp="2026-02-20 15:17:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:50.368905035 +0000 UTC m=+1008.629698608" watchObservedRunningTime="2026-02-20 15:17:50.379603581 +0000 UTC m=+1008.640397144" Feb 20 15:17:50.590204 master-0 kubenswrapper[28120]: I0220 15:17:50.590151 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9fd7b4d69-vstx7"] Feb 20 15:17:51.365532 master-0 kubenswrapper[28120]: I0220 15:17:51.365458 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9fd7b4d69-vstx7" event={"ID":"e45745ff-1548-4241-a152-def6fa24ac77","Type":"ContainerStarted","Data":"6a82090ed85ccc50e23f924780e07a12588873ffd6f87c27755e4910e21940e4"} Feb 20 15:17:51.365532 master-0 kubenswrapper[28120]: I0220 15:17:51.365524 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9fd7b4d69-vstx7" event={"ID":"e45745ff-1548-4241-a152-def6fa24ac77","Type":"ContainerStarted","Data":"7afb7da6a2b58481f837f8a2f2f22158e5e34d78c8177972986b5f21244ec4de"} Feb 20 15:17:51.365532 master-0 kubenswrapper[28120]: I0220 15:17:51.365541 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9fd7b4d69-vstx7" event={"ID":"e45745ff-1548-4241-a152-def6fa24ac77","Type":"ContainerStarted","Data":"925b791bab49330ddbc039a75eb1e60e1f1d16973894fbc18bf5dc3542067b9a"} Feb 20 15:17:51.367330 master-0 kubenswrapper[28120]: I0220 15:17:51.367297 28120 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:17:51.372755 master-0 kubenswrapper[28120]: I0220 15:17:51.372595 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-api-0" event={"ID":"5df2fc98-95ed-424c-81be-ca2f479332bb","Type":"ContainerStarted","Data":"2b6fb9bf97dfce9829976f36b13d3d74063f0058985f8acf319d49d128b7c551"} Feb 20 15:17:51.407088 master-0 kubenswrapper[28120]: I0220 15:17:51.402615 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9fd7b4d69-vstx7" podStartSLOduration=2.402596088 podStartE2EDuration="2.402596088s" podCreationTimestamp="2026-02-20 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:51.391013189 +0000 UTC m=+1009.651806752" watchObservedRunningTime="2026-02-20 15:17:51.402596088 +0000 UTC m=+1009.663389651" Feb 20 15:17:52.390560 master-0 kubenswrapper[28120]: I0220 15:17:52.390491 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-api-0" event={"ID":"5df2fc98-95ed-424c-81be-ca2f479332bb","Type":"ContainerStarted","Data":"451eec84210a64d9f548393661c35987c5b593dfe517b0115f962dae3ec20659"} Feb 20 15:17:52.391093 master-0 kubenswrapper[28120]: I0220 15:17:52.390602 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-eea69-api-0" Feb 20 15:17:52.406346 master-0 kubenswrapper[28120]: I0220 15:17:52.406271 28120 generic.go:334] "Generic (PLEG): container finished" podID="e1911cb1-e2d4-4be7-93b8-43bc600e8386" containerID="919e5789fc57a597088dc917fd76f5e01e9a881355125b3026f487282c73de4e" exitCode=0 Feb 20 15:17:52.408255 master-0 kubenswrapper[28120]: I0220 15:17:52.408206 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-4crzf" 
event={"ID":"e1911cb1-e2d4-4be7-93b8-43bc600e8386","Type":"ContainerDied","Data":"919e5789fc57a597088dc917fd76f5e01e9a881355125b3026f487282c73de4e"} Feb 20 15:17:52.427851 master-0 kubenswrapper[28120]: I0220 15:17:52.427725 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-eea69-api-0" podStartSLOduration=3.427700636 podStartE2EDuration="3.427700636s" podCreationTimestamp="2026-02-20 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:17:52.416655211 +0000 UTC m=+1010.677448814" watchObservedRunningTime="2026-02-20 15:17:52.427700636 +0000 UTC m=+1010.688494199" Feb 20 15:17:52.887610 master-0 kubenswrapper[28120]: I0220 15:17:52.887340 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:52.950537 master-0 kubenswrapper[28120]: I0220 15:17:52.947140 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-eea69-scheduler-0"] Feb 20 15:17:52.978033 master-0 kubenswrapper[28120]: I0220 15:17:52.977568 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:53.048043 master-0 kubenswrapper[28120]: I0220 15:17:53.046451 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-eea69-volume-lvm-iscsi-0"] Feb 20 15:17:53.207305 master-0 kubenswrapper[28120]: I0220 15:17:53.207106 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:53.273277 master-0 kubenswrapper[28120]: I0220 15:17:53.273200 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-eea69-backup-0"] Feb 20 15:17:53.424016 master-0 kubenswrapper[28120]: I0220 15:17:53.423872 28120 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-eea69-backup-0" podUID="6cccfb03-002e-4e09-b630-501bfe258139" containerName="cinder-backup" containerID="cri-o://4f5a4da6d27c99e36867ddeb79ffe2e3637197208020d30465ce5adb68736955" gracePeriod=30 Feb 20 15:17:53.427730 master-0 kubenswrapper[28120]: I0220 15:17:53.425951 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-eea69-scheduler-0" podUID="101c0fb0-2857-4ea7-98b0-2fdd08b39966" containerName="cinder-scheduler" containerID="cri-o://c8dd508c01b7d112c698a8a3eec28c78d4458edcdcc852653467a3fc36dccfe4" gracePeriod=30 Feb 20 15:17:53.427730 master-0 kubenswrapper[28120]: I0220 15:17:53.426188 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-eea69-scheduler-0" podUID="101c0fb0-2857-4ea7-98b0-2fdd08b39966" containerName="probe" containerID="cri-o://5daafc80726bb950b11554a078c88cb62946a87b63e3cd743d00987939962692" gracePeriod=30 Feb 20 15:17:53.427730 master-0 kubenswrapper[28120]: I0220 15:17:53.426153 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" podUID="bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" containerName="cinder-volume" containerID="cri-o://839e01f27fa816464a6dce414026ba4dff21b3cd5a7cc75432e24844bd954570" gracePeriod=30 Feb 20 15:17:53.427730 master-0 kubenswrapper[28120]: I0220 15:17:53.426957 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" podUID="bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" containerName="probe" containerID="cri-o://30da80d308a13eb09dcdfe16d7c4a73151b9f84e490b85b75f8bbf6b90da2687" gracePeriod=30 Feb 20 15:17:53.429019 master-0 kubenswrapper[28120]: I0220 15:17:53.428740 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-eea69-backup-0" podUID="6cccfb03-002e-4e09-b630-501bfe258139" containerName="probe" 
containerID="cri-o://89403f7f6619936bec0bf0b11cc89ad1ad6b86855249706936fdbce192a9bb3a" gracePeriod=30 Feb 20 15:17:53.990263 master-0 kubenswrapper[28120]: I0220 15:17:53.989834 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-4crzf" Feb 20 15:17:54.066573 master-0 kubenswrapper[28120]: I0220 15:17:54.065913 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e1911cb1-e2d4-4be7-93b8-43bc600e8386-etc-podinfo\") pod \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " Feb 20 15:17:54.066840 master-0 kubenswrapper[28120]: I0220 15:17:54.066825 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9krn9\" (UniqueName: \"kubernetes.io/projected/e1911cb1-e2d4-4be7-93b8-43bc600e8386-kube-api-access-9krn9\") pod \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " Feb 20 15:17:54.066984 master-0 kubenswrapper[28120]: I0220 15:17:54.066970 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-scripts\") pod \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " Feb 20 15:17:54.067089 master-0 kubenswrapper[28120]: I0220 15:17:54.067078 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e1911cb1-e2d4-4be7-93b8-43bc600e8386-config-data-merged\") pod \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " Feb 20 15:17:54.067212 master-0 kubenswrapper[28120]: I0220 15:17:54.067200 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-config-data\") pod \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " Feb 20 15:17:54.067307 master-0 kubenswrapper[28120]: I0220 15:17:54.067293 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-combined-ca-bundle\") pod \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\" (UID: \"e1911cb1-e2d4-4be7-93b8-43bc600e8386\") " Feb 20 15:17:54.068905 master-0 kubenswrapper[28120]: I0220 15:17:54.068874 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e1911cb1-e2d4-4be7-93b8-43bc600e8386-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "e1911cb1-e2d4-4be7-93b8-43bc600e8386" (UID: "e1911cb1-e2d4-4be7-93b8-43bc600e8386"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 20 15:17:54.073014 master-0 kubenswrapper[28120]: I0220 15:17:54.070769 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-scripts" (OuterVolumeSpecName: "scripts") pod "e1911cb1-e2d4-4be7-93b8-43bc600e8386" (UID: "e1911cb1-e2d4-4be7-93b8-43bc600e8386"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:54.073206 master-0 kubenswrapper[28120]: I0220 15:17:54.073184 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1911cb1-e2d4-4be7-93b8-43bc600e8386-kube-api-access-9krn9" (OuterVolumeSpecName: "kube-api-access-9krn9") pod "e1911cb1-e2d4-4be7-93b8-43bc600e8386" (UID: "e1911cb1-e2d4-4be7-93b8-43bc600e8386"). InnerVolumeSpecName "kube-api-access-9krn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:54.073456 master-0 kubenswrapper[28120]: I0220 15:17:54.073436 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1911cb1-e2d4-4be7-93b8-43bc600e8386-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "e1911cb1-e2d4-4be7-93b8-43bc600e8386" (UID: "e1911cb1-e2d4-4be7-93b8-43bc600e8386"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:17:54.118001 master-0 kubenswrapper[28120]: I0220 15:17:54.113113 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-config-data" (OuterVolumeSpecName: "config-data") pod "e1911cb1-e2d4-4be7-93b8-43bc600e8386" (UID: "e1911cb1-e2d4-4be7-93b8-43bc600e8386"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:54.155934 master-0 kubenswrapper[28120]: I0220 15:17:54.152067 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1911cb1-e2d4-4be7-93b8-43bc600e8386" (UID: "e1911cb1-e2d4-4be7-93b8-43bc600e8386"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:54.171550 master-0 kubenswrapper[28120]: I0220 15:17:54.171493 28120 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e1911cb1-e2d4-4be7-93b8-43bc600e8386-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.171550 master-0 kubenswrapper[28120]: I0220 15:17:54.171535 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.171550 master-0 kubenswrapper[28120]: I0220 15:17:54.171544 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.171550 master-0 kubenswrapper[28120]: I0220 15:17:54.171553 28120 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e1911cb1-e2d4-4be7-93b8-43bc600e8386-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.171740 master-0 kubenswrapper[28120]: I0220 15:17:54.171561 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9krn9\" (UniqueName: \"kubernetes.io/projected/e1911cb1-e2d4-4be7-93b8-43bc600e8386-kube-api-access-9krn9\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.171740 master-0 kubenswrapper[28120]: I0220 15:17:54.171571 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1911cb1-e2d4-4be7-93b8-43bc600e8386-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.456009 master-0 kubenswrapper[28120]: I0220 15:17:54.455873 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-4crzf" 
event={"ID":"e1911cb1-e2d4-4be7-93b8-43bc600e8386","Type":"ContainerDied","Data":"ae7326db1cc79fe93bc34c43dd8198cde5e42b373206278616b80a554f9c2e4d"} Feb 20 15:17:54.456009 master-0 kubenswrapper[28120]: I0220 15:17:54.455951 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae7326db1cc79fe93bc34c43dd8198cde5e42b373206278616b80a554f9c2e4d" Feb 20 15:17:54.456009 master-0 kubenswrapper[28120]: I0220 15:17:54.455983 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-4crzf" Feb 20 15:17:54.459907 master-0 kubenswrapper[28120]: I0220 15:17:54.459849 28120 generic.go:334] "Generic (PLEG): container finished" podID="bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" containerID="30da80d308a13eb09dcdfe16d7c4a73151b9f84e490b85b75f8bbf6b90da2687" exitCode=0 Feb 20 15:17:54.459907 master-0 kubenswrapper[28120]: I0220 15:17:54.459888 28120 generic.go:334] "Generic (PLEG): container finished" podID="bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" containerID="839e01f27fa816464a6dce414026ba4dff21b3cd5a7cc75432e24844bd954570" exitCode=0 Feb 20 15:17:54.460119 master-0 kubenswrapper[28120]: I0220 15:17:54.459923 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" event={"ID":"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f","Type":"ContainerDied","Data":"30da80d308a13eb09dcdfe16d7c4a73151b9f84e490b85b75f8bbf6b90da2687"} Feb 20 15:17:54.460119 master-0 kubenswrapper[28120]: I0220 15:17:54.459961 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" event={"ID":"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f","Type":"ContainerDied","Data":"839e01f27fa816464a6dce414026ba4dff21b3cd5a7cc75432e24844bd954570"} Feb 20 15:17:54.462555 master-0 kubenswrapper[28120]: I0220 15:17:54.462502 28120 generic.go:334] "Generic (PLEG): container finished" podID="101c0fb0-2857-4ea7-98b0-2fdd08b39966" 
containerID="5daafc80726bb950b11554a078c88cb62946a87b63e3cd743d00987939962692" exitCode=0 Feb 20 15:17:54.462555 master-0 kubenswrapper[28120]: I0220 15:17:54.462540 28120 generic.go:334] "Generic (PLEG): container finished" podID="101c0fb0-2857-4ea7-98b0-2fdd08b39966" containerID="c8dd508c01b7d112c698a8a3eec28c78d4458edcdcc852653467a3fc36dccfe4" exitCode=0 Feb 20 15:17:54.462725 master-0 kubenswrapper[28120]: I0220 15:17:54.462579 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-scheduler-0" event={"ID":"101c0fb0-2857-4ea7-98b0-2fdd08b39966","Type":"ContainerDied","Data":"5daafc80726bb950b11554a078c88cb62946a87b63e3cd743d00987939962692"} Feb 20 15:17:54.462725 master-0 kubenswrapper[28120]: I0220 15:17:54.462605 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-scheduler-0" event={"ID":"101c0fb0-2857-4ea7-98b0-2fdd08b39966","Type":"ContainerDied","Data":"c8dd508c01b7d112c698a8a3eec28c78d4458edcdcc852653467a3fc36dccfe4"} Feb 20 15:17:54.479080 master-0 kubenswrapper[28120]: I0220 15:17:54.479034 28120 generic.go:334] "Generic (PLEG): container finished" podID="6cccfb03-002e-4e09-b630-501bfe258139" containerID="89403f7f6619936bec0bf0b11cc89ad1ad6b86855249706936fdbce192a9bb3a" exitCode=0 Feb 20 15:17:54.479080 master-0 kubenswrapper[28120]: I0220 15:17:54.479075 28120 generic.go:334] "Generic (PLEG): container finished" podID="6cccfb03-002e-4e09-b630-501bfe258139" containerID="4f5a4da6d27c99e36867ddeb79ffe2e3637197208020d30465ce5adb68736955" exitCode=0 Feb 20 15:17:54.479249 master-0 kubenswrapper[28120]: I0220 15:17:54.479070 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-backup-0" event={"ID":"6cccfb03-002e-4e09-b630-501bfe258139","Type":"ContainerDied","Data":"89403f7f6619936bec0bf0b11cc89ad1ad6b86855249706936fdbce192a9bb3a"} Feb 20 15:17:54.479249 master-0 kubenswrapper[28120]: I0220 15:17:54.479113 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-eea69-backup-0" event={"ID":"6cccfb03-002e-4e09-b630-501bfe258139","Type":"ContainerDied","Data":"4f5a4da6d27c99e36867ddeb79ffe2e3637197208020d30465ce5adb68736955"} Feb 20 15:17:54.494414 master-0 kubenswrapper[28120]: I0220 15:17:54.494379 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:54.578694 master-0 kubenswrapper[28120]: I0220 15:17:54.578655 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-dev\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.578789 master-0 kubenswrapper[28120]: I0220 15:17:54.578736 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-dev" (OuterVolumeSpecName: "dev") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:54.578829 master-0 kubenswrapper[28120]: I0220 15:17:54.578794 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhqs6\" (UniqueName: \"kubernetes.io/projected/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-kube-api-access-vhqs6\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.578873 master-0 kubenswrapper[28120]: I0220 15:17:54.578864 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-sys\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.578916 master-0 kubenswrapper[28120]: I0220 15:17:54.578887 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-combined-ca-bundle\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.578970 master-0 kubenswrapper[28120]: I0220 15:17:54.578937 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-lib-modules\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.579011 master-0 kubenswrapper[28120]: I0220 15:17:54.578985 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-scripts\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.579011 master-0 kubenswrapper[28120]: I0220 15:17:54.579009 28120 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-run\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.579073 master-0 kubenswrapper[28120]: I0220 15:17:54.579011 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-sys" (OuterVolumeSpecName: "sys") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:54.579073 master-0 kubenswrapper[28120]: I0220 15:17:54.579048 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-locks-brick\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.579137 master-0 kubenswrapper[28120]: I0220 15:17:54.579087 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-nvme\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.579137 master-0 kubenswrapper[28120]: I0220 15:17:54.579095 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:54.579137 master-0 kubenswrapper[28120]: I0220 15:17:54.579112 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-iscsi\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.579227 master-0 kubenswrapper[28120]: I0220 15:17:54.579151 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-machine-id\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.579227 master-0 kubenswrapper[28120]: I0220 15:17:54.579190 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-config-data\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.579227 master-0 kubenswrapper[28120]: I0220 15:17:54.579222 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-lib-cinder\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.579347 master-0 kubenswrapper[28120]: I0220 15:17:54.579252 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-config-data-custom\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.579347 master-0 kubenswrapper[28120]: I0220 15:17:54.579319 28120 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-locks-cinder\") pod \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\" (UID: \"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f\") " Feb 20 15:17:54.579544 master-0 kubenswrapper[28120]: I0220 15:17:54.579506 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:54.579844 master-0 kubenswrapper[28120]: I0220 15:17:54.579794 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:54.579903 master-0 kubenswrapper[28120]: I0220 15:17:54.579868 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-run" (OuterVolumeSpecName: "run") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:54.579903 master-0 kubenswrapper[28120]: I0220 15:17:54.579895 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:54.580104 master-0 kubenswrapper[28120]: I0220 15:17:54.579916 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:54.580104 master-0 kubenswrapper[28120]: I0220 15:17:54.579946 28120 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-dev\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.580104 master-0 kubenswrapper[28120]: I0220 15:17:54.579975 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:54.580594 master-0 kubenswrapper[28120]: I0220 15:17:54.580572 28120 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-sys\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.580651 master-0 kubenswrapper[28120]: I0220 15:17:54.580597 28120 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.580651 master-0 kubenswrapper[28120]: I0220 15:17:54.580607 28120 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.581179 master-0 kubenswrapper[28120]: I0220 15:17:54.581150 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:54.583030 master-0 kubenswrapper[28120]: I0220 15:17:54.582994 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-kube-api-access-vhqs6" (OuterVolumeSpecName: "kube-api-access-vhqs6") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "kube-api-access-vhqs6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:54.583964 master-0 kubenswrapper[28120]: I0220 15:17:54.583862 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:54.584198 master-0 kubenswrapper[28120]: I0220 15:17:54.584158 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-scripts" (OuterVolumeSpecName: "scripts") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:54.683615 master-0 kubenswrapper[28120]: I0220 15:17:54.682776 28120 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.683615 master-0 kubenswrapper[28120]: I0220 15:17:54.682818 28120 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.683615 master-0 kubenswrapper[28120]: I0220 15:17:54.682827 28120 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.683615 master-0 kubenswrapper[28120]: I0220 15:17:54.682836 28120 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.683615 master-0 kubenswrapper[28120]: I0220 15:17:54.682845 28120 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.683615 master-0 kubenswrapper[28120]: I0220 15:17:54.682854 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhqs6\" (UniqueName: \"kubernetes.io/projected/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-kube-api-access-vhqs6\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.683615 master-0 kubenswrapper[28120]: I0220 15:17:54.682862 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.683615 master-0 kubenswrapper[28120]: I0220 15:17:54.682872 28120 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-run\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.683615 master-0 kubenswrapper[28120]: I0220 15:17:54.682880 28120 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.706774 master-0 kubenswrapper[28120]: I0220 15:17:54.705172 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:54.794089 master-0 kubenswrapper[28120]: I0220 15:17:54.793939 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:54.857418 master-0 kubenswrapper[28120]: I0220 15:17:54.847877 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-kxg76"] Feb 20 15:17:54.857418 master-0 kubenswrapper[28120]: E0220 15:17:54.848407 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" containerName="probe" Feb 20 15:17:54.857418 master-0 kubenswrapper[28120]: I0220 15:17:54.848422 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" containerName="probe" Feb 20 15:17:54.857418 master-0 kubenswrapper[28120]: E0220 15:17:54.848443 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1911cb1-e2d4-4be7-93b8-43bc600e8386" containerName="init" Feb 20 15:17:54.857418 master-0 kubenswrapper[28120]: I0220 15:17:54.848449 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1911cb1-e2d4-4be7-93b8-43bc600e8386" containerName="init" Feb 20 15:17:54.857418 master-0 kubenswrapper[28120]: E0220 15:17:54.848455 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1911cb1-e2d4-4be7-93b8-43bc600e8386" containerName="ironic-db-sync" Feb 20 15:17:54.857418 master-0 kubenswrapper[28120]: I0220 15:17:54.848461 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1911cb1-e2d4-4be7-93b8-43bc600e8386" containerName="ironic-db-sync" Feb 20 15:17:54.857418 master-0 kubenswrapper[28120]: E0220 15:17:54.848477 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" containerName="cinder-volume" Feb 20 15:17:54.857418 
master-0 kubenswrapper[28120]: I0220 15:17:54.848484 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" containerName="cinder-volume" Feb 20 15:17:54.857418 master-0 kubenswrapper[28120]: I0220 15:17:54.848694 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" containerName="cinder-volume" Feb 20 15:17:54.857418 master-0 kubenswrapper[28120]: I0220 15:17:54.848707 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1911cb1-e2d4-4be7-93b8-43bc600e8386" containerName="ironic-db-sync" Feb 20 15:17:54.857418 master-0 kubenswrapper[28120]: I0220 15:17:54.848751 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" containerName="probe" Feb 20 15:17:54.857418 master-0 kubenswrapper[28120]: I0220 15:17:54.849401 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-kxg76" Feb 20 15:17:54.875717 master-0 kubenswrapper[28120]: I0220 15:17:54.868993 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-kxg76"] Feb 20 15:17:54.916274 master-0 kubenswrapper[28120]: I0220 15:17:54.900518 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnwpw\" (UniqueName: \"kubernetes.io/projected/cb8ac0c0-7862-4b33-a63e-3d5ac82e350c-kube-api-access-gnwpw\") pod \"ironic-inspector-db-create-kxg76\" (UID: \"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c\") " pod="openstack/ironic-inspector-db-create-kxg76" Feb 20 15:17:54.916274 master-0 kubenswrapper[28120]: I0220 15:17:54.900575 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb8ac0c0-7862-4b33-a63e-3d5ac82e350c-operator-scripts\") pod \"ironic-inspector-db-create-kxg76\" (UID: 
\"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c\") " pod="openstack/ironic-inspector-db-create-kxg76" Feb 20 15:17:54.920172 master-0 kubenswrapper[28120]: I0220 15:17:54.920107 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-config-data" (OuterVolumeSpecName: "config-data") pod "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" (UID: "bab44cbd-6925-4f66-9d4c-9b6489b4fa6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:54.956021 master-0 kubenswrapper[28120]: I0220 15:17:54.955605 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-4668-account-create-update-5kcbc"] Feb 20 15:17:54.957592 master-0 kubenswrapper[28120]: I0220 15:17:54.957539 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" Feb 20 15:17:54.962462 master-0 kubenswrapper[28120]: I0220 15:17:54.962285 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-7dbf56d775-btzqc"] Feb 20 15:17:54.972476 master-0 kubenswrapper[28120]: I0220 15:17:54.972413 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Feb 20 15:17:54.988703 master-0 kubenswrapper[28120]: I0220 15:17:54.988444 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-4668-account-create-update-5kcbc"] Feb 20 15:17:54.988703 master-0 kubenswrapper[28120]: I0220 15:17:54.988488 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-7dbf56d775-btzqc"] Feb 20 15:17:54.988703 master-0 kubenswrapper[28120]: I0220 15:17:54.988663 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:17:54.991749 master-0 kubenswrapper[28120]: I0220 15:17:54.991725 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Feb 20 15:17:54.999589 master-0 kubenswrapper[28120]: I0220 15:17:54.999565 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:55.015877 master-0 kubenswrapper[28120]: I0220 15:17:55.015322 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnwpw\" (UniqueName: \"kubernetes.io/projected/cb8ac0c0-7862-4b33-a63e-3d5ac82e350c-kube-api-access-gnwpw\") pod \"ironic-inspector-db-create-kxg76\" (UID: \"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c\") " pod="openstack/ironic-inspector-db-create-kxg76" Feb 20 15:17:55.015877 master-0 kubenswrapper[28120]: I0220 15:17:55.015386 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg75t\" (UniqueName: \"kubernetes.io/projected/b640c921-f9d7-44ff-9bfe-44f48306a404-kube-api-access-bg75t\") pod \"ironic-inspector-4668-account-create-update-5kcbc\" (UID: \"b640c921-f9d7-44ff-9bfe-44f48306a404\") " pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" Feb 20 15:17:55.015877 master-0 kubenswrapper[28120]: I0220 15:17:55.015560 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb8ac0c0-7862-4b33-a63e-3d5ac82e350c-operator-scripts\") pod \"ironic-inspector-db-create-kxg76\" (UID: \"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c\") " pod="openstack/ironic-inspector-db-create-kxg76" Feb 20 15:17:55.016749 master-0 kubenswrapper[28120]: I0220 15:17:55.016661 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/b640c921-f9d7-44ff-9bfe-44f48306a404-operator-scripts\") pod \"ironic-inspector-4668-account-create-update-5kcbc\" (UID: \"b640c921-f9d7-44ff-9bfe-44f48306a404\") " pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" Feb 20 15:17:55.017391 master-0 kubenswrapper[28120]: I0220 15:17:55.016982 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:55.017391 master-0 kubenswrapper[28120]: I0220 15:17:55.017324 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb8ac0c0-7862-4b33-a63e-3d5ac82e350c-operator-scripts\") pod \"ironic-inspector-db-create-kxg76\" (UID: \"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c\") " pod="openstack/ironic-inspector-db-create-kxg76" Feb 20 15:17:55.055113 master-0 kubenswrapper[28120]: I0220 15:17:55.042979 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnwpw\" (UniqueName: \"kubernetes.io/projected/cb8ac0c0-7862-4b33-a63e-3d5ac82e350c-kube-api-access-gnwpw\") pod \"ironic-inspector-db-create-kxg76\" (UID: \"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c\") " pod="openstack/ironic-inspector-db-create-kxg76" Feb 20 15:17:55.122474 master-0 kubenswrapper[28120]: I0220 15:17:55.118130 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-scripts\") pod \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " Feb 20 15:17:55.122474 master-0 kubenswrapper[28120]: I0220 15:17:55.118200 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-combined-ca-bundle\") pod \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " Feb 20 15:17:55.122474 master-0 kubenswrapper[28120]: I0220 15:17:55.118231 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/101c0fb0-2857-4ea7-98b0-2fdd08b39966-etc-machine-id\") pod \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " Feb 20 15:17:55.122474 master-0 kubenswrapper[28120]: I0220 15:17:55.118261 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-config-data-custom\") pod \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " Feb 20 15:17:55.122474 master-0 kubenswrapper[28120]: I0220 15:17:55.118291 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-config-data\") pod \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " Feb 20 15:17:55.122474 master-0 kubenswrapper[28120]: I0220 15:17:55.118355 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xczk5\" (UniqueName: \"kubernetes.io/projected/101c0fb0-2857-4ea7-98b0-2fdd08b39966-kube-api-access-xczk5\") pod \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\" (UID: \"101c0fb0-2857-4ea7-98b0-2fdd08b39966\") " Feb 20 15:17:55.122474 master-0 kubenswrapper[28120]: I0220 15:17:55.118544 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg75t\" (UniqueName: \"kubernetes.io/projected/b640c921-f9d7-44ff-9bfe-44f48306a404-kube-api-access-bg75t\") pod 
\"ironic-inspector-4668-account-create-update-5kcbc\" (UID: \"b640c921-f9d7-44ff-9bfe-44f48306a404\") " pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" Feb 20 15:17:55.122474 master-0 kubenswrapper[28120]: I0220 15:17:55.118617 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-878r7\" (UniqueName: \"kubernetes.io/projected/266c4a44-1f0a-468c-99c1-1dbdab46f6ad-kube-api-access-878r7\") pod \"ironic-neutron-agent-7dbf56d775-btzqc\" (UID: \"266c4a44-1f0a-468c-99c1-1dbdab46f6ad\") " pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:17:55.122474 master-0 kubenswrapper[28120]: I0220 15:17:55.118648 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/266c4a44-1f0a-468c-99c1-1dbdab46f6ad-config\") pod \"ironic-neutron-agent-7dbf56d775-btzqc\" (UID: \"266c4a44-1f0a-468c-99c1-1dbdab46f6ad\") " pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:17:55.122474 master-0 kubenswrapper[28120]: I0220 15:17:55.118723 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266c4a44-1f0a-468c-99c1-1dbdab46f6ad-combined-ca-bundle\") pod \"ironic-neutron-agent-7dbf56d775-btzqc\" (UID: \"266c4a44-1f0a-468c-99c1-1dbdab46f6ad\") " pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:17:55.122474 master-0 kubenswrapper[28120]: I0220 15:17:55.118745 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b640c921-f9d7-44ff-9bfe-44f48306a404-operator-scripts\") pod \"ironic-inspector-4668-account-create-update-5kcbc\" (UID: \"b640c921-f9d7-44ff-9bfe-44f48306a404\") " pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" Feb 20 15:17:55.132447 master-0 kubenswrapper[28120]: I0220 
15:17:55.125113 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-scripts" (OuterVolumeSpecName: "scripts") pod "101c0fb0-2857-4ea7-98b0-2fdd08b39966" (UID: "101c0fb0-2857-4ea7-98b0-2fdd08b39966"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:55.132447 master-0 kubenswrapper[28120]: I0220 15:17:55.126652 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b640c921-f9d7-44ff-9bfe-44f48306a404-operator-scripts\") pod \"ironic-inspector-4668-account-create-update-5kcbc\" (UID: \"b640c921-f9d7-44ff-9bfe-44f48306a404\") " pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" Feb 20 15:17:55.132447 master-0 kubenswrapper[28120]: I0220 15:17:55.128091 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/101c0fb0-2857-4ea7-98b0-2fdd08b39966-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "101c0fb0-2857-4ea7-98b0-2fdd08b39966" (UID: "101c0fb0-2857-4ea7-98b0-2fdd08b39966"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:55.142953 master-0 kubenswrapper[28120]: I0220 15:17:55.140246 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/101c0fb0-2857-4ea7-98b0-2fdd08b39966-kube-api-access-xczk5" (OuterVolumeSpecName: "kube-api-access-xczk5") pod "101c0fb0-2857-4ea7-98b0-2fdd08b39966" (UID: "101c0fb0-2857-4ea7-98b0-2fdd08b39966"). InnerVolumeSpecName "kube-api-access-xczk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:55.172947 master-0 kubenswrapper[28120]: I0220 15:17:55.145788 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg75t\" (UniqueName: \"kubernetes.io/projected/b640c921-f9d7-44ff-9bfe-44f48306a404-kube-api-access-bg75t\") pod \"ironic-inspector-4668-account-create-update-5kcbc\" (UID: \"b640c921-f9d7-44ff-9bfe-44f48306a404\") " pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" Feb 20 15:17:55.174065 master-0 kubenswrapper[28120]: I0220 15:17:55.174024 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c54fb858c-bpkz9"] Feb 20 15:17:55.174302 master-0 kubenswrapper[28120]: I0220 15:17:55.174272 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" podUID="fa98495d-36f2-4d3b-ad8a-139b1d11a4df" containerName="dnsmasq-dns" containerID="cri-o://8e54cb78c719e825a618d3ebdccf42c1ca9861421fbd8b4dadf610d3b87fdfe3" gracePeriod=10 Feb 20 15:17:55.176075 master-0 kubenswrapper[28120]: I0220 15:17:55.176050 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:55.201176 master-0 kubenswrapper[28120]: I0220 15:17:55.183248 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "101c0fb0-2857-4ea7-98b0-2fdd08b39966" (UID: "101c0fb0-2857-4ea7-98b0-2fdd08b39966"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:55.227966 master-0 kubenswrapper[28120]: I0220 15:17:55.223808 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266c4a44-1f0a-468c-99c1-1dbdab46f6ad-combined-ca-bundle\") pod \"ironic-neutron-agent-7dbf56d775-btzqc\" (UID: \"266c4a44-1f0a-468c-99c1-1dbdab46f6ad\") " pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:17:55.227966 master-0 kubenswrapper[28120]: I0220 15:17:55.223960 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-878r7\" (UniqueName: \"kubernetes.io/projected/266c4a44-1f0a-468c-99c1-1dbdab46f6ad-kube-api-access-878r7\") pod \"ironic-neutron-agent-7dbf56d775-btzqc\" (UID: \"266c4a44-1f0a-468c-99c1-1dbdab46f6ad\") " pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:17:55.227966 master-0 kubenswrapper[28120]: I0220 15:17:55.223995 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/266c4a44-1f0a-468c-99c1-1dbdab46f6ad-config\") pod \"ironic-neutron-agent-7dbf56d775-btzqc\" (UID: \"266c4a44-1f0a-468c-99c1-1dbdab46f6ad\") " pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:17:55.227966 master-0 kubenswrapper[28120]: I0220 15:17:55.224074 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:55.227966 master-0 kubenswrapper[28120]: I0220 15:17:55.224085 28120 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/101c0fb0-2857-4ea7-98b0-2fdd08b39966-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:55.227966 master-0 kubenswrapper[28120]: I0220 15:17:55.224095 28120 reconciler_common.go:293] "Volume detached 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:55.227966 master-0 kubenswrapper[28120]: I0220 15:17:55.224105 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xczk5\" (UniqueName: \"kubernetes.io/projected/101c0fb0-2857-4ea7-98b0-2fdd08b39966-kube-api-access-xczk5\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:55.247557 master-0 kubenswrapper[28120]: I0220 15:17:55.247515 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"] Feb 20 15:17:55.247883 master-0 kubenswrapper[28120]: I0220 15:17:55.247842 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-878r7\" (UniqueName: \"kubernetes.io/projected/266c4a44-1f0a-468c-99c1-1dbdab46f6ad-kube-api-access-878r7\") pod \"ironic-neutron-agent-7dbf56d775-btzqc\" (UID: \"266c4a44-1f0a-468c-99c1-1dbdab46f6ad\") " pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:17:55.248072 master-0 kubenswrapper[28120]: E0220 15:17:55.248053 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101c0fb0-2857-4ea7-98b0-2fdd08b39966" containerName="cinder-scheduler" Feb 20 15:17:55.248107 master-0 kubenswrapper[28120]: I0220 15:17:55.248072 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="101c0fb0-2857-4ea7-98b0-2fdd08b39966" containerName="cinder-scheduler" Feb 20 15:17:55.248107 master-0 kubenswrapper[28120]: E0220 15:17:55.248085 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101c0fb0-2857-4ea7-98b0-2fdd08b39966" containerName="probe" Feb 20 15:17:55.248107 master-0 kubenswrapper[28120]: I0220 15:17:55.248091 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="101c0fb0-2857-4ea7-98b0-2fdd08b39966" containerName="probe" Feb 20 15:17:55.248339 master-0 kubenswrapper[28120]: I0220 15:17:55.248321 28120 
memory_manager.go:354] "RemoveStaleState removing state" podUID="101c0fb0-2857-4ea7-98b0-2fdd08b39966" containerName="cinder-scheduler" Feb 20 15:17:55.248385 master-0 kubenswrapper[28120]: I0220 15:17:55.248378 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="101c0fb0-2857-4ea7-98b0-2fdd08b39966" containerName="probe" Feb 20 15:17:55.249546 master-0 kubenswrapper[28120]: I0220 15:17:55.249483 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" Feb 20 15:17:55.255308 master-0 kubenswrapper[28120]: I0220 15:17:55.255268 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266c4a44-1f0a-468c-99c1-1dbdab46f6ad-combined-ca-bundle\") pod \"ironic-neutron-agent-7dbf56d775-btzqc\" (UID: \"266c4a44-1f0a-468c-99c1-1dbdab46f6ad\") " pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:17:55.259684 master-0 kubenswrapper[28120]: I0220 15:17:55.259650 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"] Feb 20 15:17:55.260368 master-0 kubenswrapper[28120]: I0220 15:17:55.260332 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-kxg76" Feb 20 15:17:55.262386 master-0 kubenswrapper[28120]: I0220 15:17:55.262292 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/266c4a44-1f0a-468c-99c1-1dbdab46f6ad-config\") pod \"ironic-neutron-agent-7dbf56d775-btzqc\" (UID: \"266c4a44-1f0a-468c-99c1-1dbdab46f6ad\") " pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:17:55.322517 master-0 kubenswrapper[28120]: I0220 15:17:55.322395 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-749b5b76fd-6zvxh"] Feb 20 15:17:55.326654 master-0 kubenswrapper[28120]: I0220 15:17:55.326590 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-749b5b76fd-6zvxh" Feb 20 15:17:55.329244 master-0 kubenswrapper[28120]: I0220 15:17:55.329206 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 20 15:17:55.330057 master-0 kubenswrapper[28120]: I0220 15:17:55.330027 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:55.330909 master-0 kubenswrapper[28120]: I0220 15:17:55.330866 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Feb 20 15:17:55.331479 master-0 kubenswrapper[28120]: I0220 15:17:55.331448 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 20 15:17:55.332568 master-0 kubenswrapper[28120]: I0220 15:17:55.332538 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Feb 20 15:17:55.332568 master-0 kubenswrapper[28120]: I0220 15:17:55.332547 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Feb 20 15:17:55.342988 master-0 kubenswrapper[28120]: I0220 15:17:55.342944 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "101c0fb0-2857-4ea7-98b0-2fdd08b39966" (UID: "101c0fb0-2857-4ea7-98b0-2fdd08b39966"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:55.344124 master-0 kubenswrapper[28120]: I0220 15:17:55.344083 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-749b5b76fd-6zvxh"] Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.425338 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.425947 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-config-data" (OuterVolumeSpecName: "config-data") pod "101c0fb0-2857-4ea7-98b0-2fdd08b39966" (UID: "101c0fb0-2857-4ea7-98b0-2fdd08b39966"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427054 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-dev\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427120 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-locks-cinder\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427217 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-config-data\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427278 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-combined-ca-bundle\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427319 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-lib-modules\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427345 28120 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-nvme\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427380 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-scripts\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427395 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-run\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427422 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-locks-brick\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427440 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v58mh\" (UniqueName: \"kubernetes.io/projected/6cccfb03-002e-4e09-b630-501bfe258139-kube-api-access-v58mh\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427454 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-config-data-custom\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427493 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-lib-cinder\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427523 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-sys\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427556 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-machine-id\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427605 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-iscsi\") pod \"6cccfb03-002e-4e09-b630-501bfe258139\" (UID: \"6cccfb03-002e-4e09-b630-501bfe258139\") " Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427843 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-config\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " 
pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427908 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mb22\" (UniqueName: \"kubernetes.io/projected/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-kube-api-access-8mb22\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.427997 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06327dac-718e-426d-8c0e-9955f38106f7-logs\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428038 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428066 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-dns-svc\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428115 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-config-data-custom\") 
pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428145 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428215 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428243 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/06327dac-718e-426d-8c0e-9955f38106f7-etc-podinfo\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428343 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-config-data\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428361 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-scripts\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428385 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-combined-ca-bundle\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428434 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c69pl\" (UniqueName: \"kubernetes.io/projected/06327dac-718e-426d-8c0e-9955f38106f7-kube-api-access-c69pl\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428452 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/06327dac-718e-426d-8c0e-9955f38106f7-config-data-merged\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428449 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-run" (OuterVolumeSpecName: "run") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428534 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428749 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-dev" (OuterVolumeSpecName: "dev") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:55.429031 master-0 kubenswrapper[28120]: I0220 15:17:55.428839 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 20 15:17:55.432033 master-0 kubenswrapper[28120]: I0220 15:17:55.431225 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:17:55.432033 master-0 kubenswrapper[28120]: I0220 15:17:55.431557 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:17:55.432033 master-0 kubenswrapper[28120]: I0220 15:17:55.431576 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-sys" (OuterVolumeSpecName: "sys") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:17:55.432033 master-0 kubenswrapper[28120]: I0220 15:17:55.431596 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:17:55.432033 master-0 kubenswrapper[28120]: I0220 15:17:55.431651 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:17:55.432764 master-0 kubenswrapper[28120]: I0220 15:17:55.432672 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cccfb03-002e-4e09-b630-501bfe258139-kube-api-access-v58mh" (OuterVolumeSpecName: "kube-api-access-v58mh") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "kube-api-access-v58mh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:17:55.434008 master-0 kubenswrapper[28120]: I0220 15:17:55.433913 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.434008 master-0 kubenswrapper[28120]: I0220 15:17:55.433969 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/101c0fb0-2857-4ea7-98b0-2fdd08b39966-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.434008 master-0 kubenswrapper[28120]: I0220 15:17:55.433984 28120 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-run\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.434317 master-0 kubenswrapper[28120]: I0220 15:17:55.434094 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 20 15:17:55.437547 master-0 kubenswrapper[28120]: I0220 15:17:55.437508 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc"
Feb 20 15:17:55.439039 master-0 kubenswrapper[28120]: I0220 15:17:55.438977 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:55.440473 master-0 kubenswrapper[28120]: I0220 15:17:55.440290 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-scripts" (OuterVolumeSpecName: "scripts") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:55.506437 master-0 kubenswrapper[28120]: I0220 15:17:55.506386 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-backup-0" event={"ID":"6cccfb03-002e-4e09-b630-501bfe258139","Type":"ContainerDied","Data":"f71839a100f03b4edbb3bf9cc08f406c1af0768463e65dafacd63ad850cbe0f3"}
Feb 20 15:17:55.506437 master-0 kubenswrapper[28120]: I0220 15:17:55.506441 28120 scope.go:117] "RemoveContainer" containerID="89403f7f6619936bec0bf0b11cc89ad1ad6b86855249706936fdbce192a9bb3a"
Feb 20 15:17:55.506921 master-0 kubenswrapper[28120]: I0220 15:17:55.506572 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-backup-0"
Feb 20 15:17:55.510233 master-0 kubenswrapper[28120]: I0220 15:17:55.510191 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" event={"ID":"bab44cbd-6925-4f66-9d4c-9b6489b4fa6f","Type":"ContainerDied","Data":"a8722c078ad03c029798e957cc12c18c8f080e2503180ba8114b9727b6a45450"}
Feb 20 15:17:55.510294 master-0 kubenswrapper[28120]: I0220 15:17:55.510260 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.518331 master-0 kubenswrapper[28120]: I0220 15:17:55.518288 28120 generic.go:334] "Generic (PLEG): container finished" podID="fa98495d-36f2-4d3b-ad8a-139b1d11a4df" containerID="8e54cb78c719e825a618d3ebdccf42c1ca9861421fbd8b4dadf610d3b87fdfe3" exitCode=0
Feb 20 15:17:55.518437 master-0 kubenswrapper[28120]: I0220 15:17:55.518357 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" event={"ID":"fa98495d-36f2-4d3b-ad8a-139b1d11a4df","Type":"ContainerDied","Data":"8e54cb78c719e825a618d3ebdccf42c1ca9861421fbd8b4dadf610d3b87fdfe3"}
Feb 20 15:17:55.520774 master-0 kubenswrapper[28120]: I0220 15:17:55.520745 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-scheduler-0" event={"ID":"101c0fb0-2857-4ea7-98b0-2fdd08b39966","Type":"ContainerDied","Data":"fc1045efeae51519cbce875abbf40dd60e2d6bdfdc4ef5b097e0150930228682"}
Feb 20 15:17:55.520913 master-0 kubenswrapper[28120]: I0220 15:17:55.520863 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:17:55.535805 master-0 kubenswrapper[28120]: I0220 15:17:55.535761 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-config-data-custom\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.535911 master-0 kubenswrapper[28120]: I0220 15:17:55.535824 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.535911 master-0 kubenswrapper[28120]: I0220 15:17:55.535894 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.536034 master-0 kubenswrapper[28120]: I0220 15:17:55.535919 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/06327dac-718e-426d-8c0e-9955f38106f7-etc-podinfo\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.536034 master-0 kubenswrapper[28120]: I0220 15:17:55.535984 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-config-data\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.536034 master-0 kubenswrapper[28120]: I0220 15:17:55.536004 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-scripts\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.536034 master-0 kubenswrapper[28120]: I0220 15:17:55.536025 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-combined-ca-bundle\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.536155 master-0 kubenswrapper[28120]: I0220 15:17:55.536050 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c69pl\" (UniqueName: \"kubernetes.io/projected/06327dac-718e-426d-8c0e-9955f38106f7-kube-api-access-c69pl\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.536155 master-0 kubenswrapper[28120]: I0220 15:17:55.536069 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/06327dac-718e-426d-8c0e-9955f38106f7-config-data-merged\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.536155 master-0 kubenswrapper[28120]: I0220 15:17:55.536116 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-config\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.536155 master-0 kubenswrapper[28120]: I0220 15:17:55.536145 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mb22\" (UniqueName: \"kubernetes.io/projected/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-kube-api-access-8mb22\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.536269 master-0 kubenswrapper[28120]: I0220 15:17:55.536166 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06327dac-718e-426d-8c0e-9955f38106f7-logs\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.536269 master-0 kubenswrapper[28120]: I0220 15:17:55.536190 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.536269 master-0 kubenswrapper[28120]: I0220 15:17:55.536210 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-dns-svc\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.536437 master-0 kubenswrapper[28120]: I0220 15:17:55.536269 28120 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-lib-modules\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.536437 master-0 kubenswrapper[28120]: I0220 15:17:55.536281 28120 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-nvme\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.536437 master-0 kubenswrapper[28120]: I0220 15:17:55.536290 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.536437 master-0 kubenswrapper[28120]: I0220 15:17:55.536299 28120 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.536437 master-0 kubenswrapper[28120]: I0220 15:17:55.536309 28120 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-config-data-custom\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.536437 master-0 kubenswrapper[28120]: I0220 15:17:55.536322 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v58mh\" (UniqueName: \"kubernetes.io/projected/6cccfb03-002e-4e09-b630-501bfe258139-kube-api-access-v58mh\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.536598 master-0 kubenswrapper[28120]: I0220 15:17:55.536449 28120 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.536598 master-0 kubenswrapper[28120]: I0220 15:17:55.536458 28120 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-sys\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.536598 master-0 kubenswrapper[28120]: I0220 15:17:55.536468 28120 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.536598 master-0 kubenswrapper[28120]: I0220 15:17:55.536476 28120 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.536598 master-0 kubenswrapper[28120]: I0220 15:17:55.536485 28120 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-dev\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.536598 master-0 kubenswrapper[28120]: I0220 15:17:55.536494 28120 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6cccfb03-002e-4e09-b630-501bfe258139-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.537625 master-0 kubenswrapper[28120]: I0220 15:17:55.537587 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-dns-svc\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.539140 master-0 kubenswrapper[28120]: I0220 15:17:55.539089 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/06327dac-718e-426d-8c0e-9955f38106f7-config-data-merged\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.539689 master-0 kubenswrapper[28120]: I0220 15:17:55.539655 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-config\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.540180 master-0 kubenswrapper[28120]: I0220 15:17:55.540121 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06327dac-718e-426d-8c0e-9955f38106f7-logs\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.540240 master-0 kubenswrapper[28120]: I0220 15:17:55.540162 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:55.541017 master-0 kubenswrapper[28120]: I0220 15:17:55.540689 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.544103 master-0 kubenswrapper[28120]: I0220 15:17:55.541729 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-combined-ca-bundle\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.553885 master-0 kubenswrapper[28120]: I0220 15:17:55.553842 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/06327dac-718e-426d-8c0e-9955f38106f7-etc-podinfo\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.554699 master-0 kubenswrapper[28120]: I0220 15:17:55.554622 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.565241 master-0 kubenswrapper[28120]: I0220 15:17:55.565165 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-eea69-volume-lvm-iscsi-0"]
Feb 20 15:17:55.569583 master-0 kubenswrapper[28120]: I0220 15:17:55.569541 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.569736 master-0 kubenswrapper[28120]: I0220 15:17:55.569662 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-config-data-custom\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.571564 master-0 kubenswrapper[28120]: I0220 15:17:55.571527 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-scripts\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.571721 master-0 kubenswrapper[28120]: I0220 15:17:55.571697 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-config-data\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.581316 master-0 kubenswrapper[28120]: I0220 15:17:55.581050 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c69pl\" (UniqueName: \"kubernetes.io/projected/06327dac-718e-426d-8c0e-9955f38106f7-kube-api-access-c69pl\") pod \"ironic-749b5b76fd-6zvxh\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.590889 master-0 kubenswrapper[28120]: I0220 15:17:55.590463 28120 scope.go:117] "RemoveContainer" containerID="4f5a4da6d27c99e36867ddeb79ffe2e3637197208020d30465ce5adb68736955"
Feb 20 15:17:55.608384 master-0 kubenswrapper[28120]: I0220 15:17:55.608326 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-eea69-volume-lvm-iscsi-0"]
Feb 20 15:17:55.619804 master-0 kubenswrapper[28120]: I0220 15:17:55.619735 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mb22\" (UniqueName: \"kubernetes.io/projected/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-kube-api-access-8mb22\") pod \"dnsmasq-dns-6b9c77ddfc-pgzhs\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.623334 master-0 kubenswrapper[28120]: I0220 15:17:55.623259 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:17:55.637043 master-0 kubenswrapper[28120]: I0220 15:17:55.636993 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eea69-volume-lvm-iscsi-0"]
Feb 20 15:17:55.637598 master-0 kubenswrapper[28120]: E0220 15:17:55.637542 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cccfb03-002e-4e09-b630-501bfe258139" containerName="cinder-backup"
Feb 20 15:17:55.637598 master-0 kubenswrapper[28120]: I0220 15:17:55.637562 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cccfb03-002e-4e09-b630-501bfe258139" containerName="cinder-backup"
Feb 20 15:17:55.637677 master-0 kubenswrapper[28120]: E0220 15:17:55.637607 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cccfb03-002e-4e09-b630-501bfe258139" containerName="probe"
Feb 20 15:17:55.637677 master-0 kubenswrapper[28120]: I0220 15:17:55.637614 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cccfb03-002e-4e09-b630-501bfe258139" containerName="probe"
Feb 20 15:17:55.637952 master-0 kubenswrapper[28120]: I0220 15:17:55.637893 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cccfb03-002e-4e09-b630-501bfe258139" containerName="cinder-backup"
Feb 20 15:17:55.638002 master-0 kubenswrapper[28120]: I0220 15:17:55.637964 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cccfb03-002e-4e09-b630-501bfe258139" containerName="probe"
Feb 20 15:17:55.639161 master-0 kubenswrapper[28120]: I0220 15:17:55.639140 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.641493 master-0 kubenswrapper[28120]: I0220 15:17:55.641461 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.650313 master-0 kubenswrapper[28120]: I0220 15:17:55.650269 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-eea69-volume-lvm-iscsi-config-data"
Feb 20 15:17:55.652949 master-0 kubenswrapper[28120]: I0220 15:17:55.652893 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:17:55.679118 master-0 kubenswrapper[28120]: I0220 15:17:55.678748 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-volume-lvm-iscsi-0"]
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.742181 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-config-data" (OuterVolumeSpecName: "config-data") pod "6cccfb03-002e-4e09-b630-501bfe258139" (UID: "6cccfb03-002e-4e09-b630-501bfe258139"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.743600 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-scripts\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.743661 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-etc-iscsi\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.743689 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-config-data\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.743720 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-lib-modules\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.743786 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-var-lib-cinder\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.743801 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-config-data-custom\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.743823 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-var-locks-brick\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.743842 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-var-locks-cinder\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.743880 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-etc-machine-id\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.743900 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-run\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.744238 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-sys\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.744278 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66cvg\" (UniqueName: \"kubernetes.io/projected/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-kube-api-access-66cvg\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.744297 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-combined-ca-bundle\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.744346 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-dev\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.744361 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-etc-nvme\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.746754 master-0 kubenswrapper[28120]: I0220 15:17:55.744466 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cccfb03-002e-4e09-b630-501bfe258139-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 15:17:55.846656 master-0 kubenswrapper[28120]: I0220 15:17:55.846544 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-etc-machine-id\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.846656 master-0 kubenswrapper[28120]: I0220 15:17:55.846599 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-run\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.846656 master-0 kubenswrapper[28120]: I0220 15:17:55.846627 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-sys\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.846814 master-0 kubenswrapper[28120]: I0220 15:17:55.846661 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66cvg\" (UniqueName: \"kubernetes.io/projected/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-kube-api-access-66cvg\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.846955 master-0 kubenswrapper[28120]: I0220 15:17:55.846877 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-run\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.847061 master-0 kubenswrapper[28120]: I0220 15:17:55.846913 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-etc-machine-id\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.847109 master-0 kubenswrapper[28120]: I0220 15:17:55.846960 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-sys\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.847140 master-0 kubenswrapper[28120]: I0220 15:17:55.846988 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-combined-ca-bundle\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.847233 master-0 kubenswrapper[28120]: I0220 15:17:55.847177 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-dev\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.847763 master-0 kubenswrapper[28120]: I0220 15:17:55.847210 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-dev\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.847763 master-0 kubenswrapper[28120]: I0220 15:17:55.847717 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-etc-nvme\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.847838 master-0 kubenswrapper[28120]: I0220 15:17:55.847788 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-etc-nvme\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.847887 master-0 kubenswrapper[28120]: I0220 15:17:55.847865 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-scripts\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.847991 master-0 kubenswrapper[28120]: I0220 15:17:55.847970 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-etc-iscsi\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.848031 master-0 kubenswrapper[28120]: I0220 15:17:55.848020 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-config-data\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.848198 master-0 kubenswrapper[28120]: I0220 15:17:55.848167 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-lib-modules\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.848372 master-0 kubenswrapper[28120]: I0220 15:17:55.848345 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-var-lib-cinder\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.848416 master-0 kubenswrapper[28120]: I0220 15:17:55.848372 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-config-data-custom\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:17:55.848416 master-0 kubenswrapper[28120]: I0220 15:17:55.848403 28120
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-var-locks-brick\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.848471 master-0 kubenswrapper[28120]: I0220 15:17:55.848432 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-var-locks-cinder\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.848645 master-0 kubenswrapper[28120]: I0220 15:17:55.848619 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-var-locks-cinder\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.848693 master-0 kubenswrapper[28120]: I0220 15:17:55.848651 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-lib-modules\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.848693 master-0 kubenswrapper[28120]: I0220 15:17:55.848685 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-var-lib-cinder\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.848895 master-0 kubenswrapper[28120]: 
I0220 15:17:55.848862 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-var-locks-brick\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.850082 master-0 kubenswrapper[28120]: I0220 15:17:55.848331 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-etc-iscsi\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.854014 master-0 kubenswrapper[28120]: I0220 15:17:55.853977 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-config-data\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.854434 master-0 kubenswrapper[28120]: I0220 15:17:55.854395 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-scripts\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.854805 master-0 kubenswrapper[28120]: I0220 15:17:55.854756 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-combined-ca-bundle\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.863101 master-0 kubenswrapper[28120]: I0220 15:17:55.863048 
28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-config-data-custom\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.868843 master-0 kubenswrapper[28120]: I0220 15:17:55.865332 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66cvg\" (UniqueName: \"kubernetes.io/projected/e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0-kube-api-access-66cvg\") pod \"cinder-eea69-volume-lvm-iscsi-0\" (UID: \"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0\") " pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.868843 master-0 kubenswrapper[28120]: I0220 15:17:55.868809 28120 scope.go:117] "RemoveContainer" containerID="30da80d308a13eb09dcdfe16d7c4a73151b9f84e490b85b75f8bbf6b90da2687" Feb 20 15:17:55.918996 master-0 kubenswrapper[28120]: I0220 15:17:55.918940 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-eea69-scheduler-0"] Feb 20 15:17:55.946000 master-0 kubenswrapper[28120]: I0220 15:17:55.945900 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-eea69-scheduler-0"] Feb 20 15:17:55.946321 master-0 kubenswrapper[28120]: I0220 15:17:55.946277 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:17:55.967976 master-0 kubenswrapper[28120]: I0220 15:17:55.967255 28120 scope.go:117] "RemoveContainer" containerID="839e01f27fa816464a6dce414026ba4dff21b3cd5a7cc75432e24844bd954570" Feb 20 15:17:55.990237 master-0 kubenswrapper[28120]: I0220 15:17:55.990159 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eea69-scheduler-0"] Feb 20 15:17:55.995389 master-0 kubenswrapper[28120]: I0220 15:17:55.994547 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.003213 master-0 kubenswrapper[28120]: I0220 15:17:55.997621 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-eea69-scheduler-config-data" Feb 20 15:17:56.017780 master-0 kubenswrapper[28120]: I0220 15:17:56.017498 28120 scope.go:117] "RemoveContainer" containerID="5daafc80726bb950b11554a078c88cb62946a87b63e3cd743d00987939962692" Feb 20 15:17:56.017780 master-0 kubenswrapper[28120]: I0220 15:17:56.017774 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:56.020680 master-0 kubenswrapper[28120]: I0220 15:17:56.019525 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-scheduler-0"] Feb 20 15:17:56.029378 master-0 kubenswrapper[28120]: I0220 15:17:56.029329 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-eea69-backup-0"] Feb 20 15:17:56.039420 master-0 kubenswrapper[28120]: I0220 15:17:56.039263 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-eea69-backup-0"] Feb 20 15:17:56.050214 master-0 kubenswrapper[28120]: I0220 15:17:56.049079 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-eea69-backup-0"] Feb 20 15:17:56.050214 master-0 kubenswrapper[28120]: E0220 15:17:56.049601 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa98495d-36f2-4d3b-ad8a-139b1d11a4df" containerName="dnsmasq-dns" Feb 20 15:17:56.050214 master-0 kubenswrapper[28120]: I0220 15:17:56.049615 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa98495d-36f2-4d3b-ad8a-139b1d11a4df" containerName="dnsmasq-dns" Feb 20 15:17:56.051617 master-0 kubenswrapper[28120]: E0220 15:17:56.050537 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa98495d-36f2-4d3b-ad8a-139b1d11a4df" containerName="init" Feb 20 15:17:56.051617 master-0 
kubenswrapper[28120]: I0220 15:17:56.050581 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa98495d-36f2-4d3b-ad8a-139b1d11a4df" containerName="init" Feb 20 15:17:56.051617 master-0 kubenswrapper[28120]: I0220 15:17:56.050850 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa98495d-36f2-4d3b-ad8a-139b1d11a4df" containerName="dnsmasq-dns" Feb 20 15:17:56.052828 master-0 kubenswrapper[28120]: I0220 15:17:56.052177 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.060014 master-0 kubenswrapper[28120]: I0220 15:17:56.059757 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-eea69-backup-config-data" Feb 20 15:17:56.110947 master-0 kubenswrapper[28120]: I0220 15:17:56.110875 28120 scope.go:117] "RemoveContainer" containerID="c8dd508c01b7d112c698a8a3eec28c78d4458edcdcc852653467a3fc36dccfe4" Feb 20 15:17:56.113855 master-0 kubenswrapper[28120]: I0220 15:17:56.113513 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="101c0fb0-2857-4ea7-98b0-2fdd08b39966" path="/var/lib/kubelet/pods/101c0fb0-2857-4ea7-98b0-2fdd08b39966/volumes" Feb 20 15:17:56.114687 master-0 kubenswrapper[28120]: I0220 15:17:56.114187 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cccfb03-002e-4e09-b630-501bfe258139" path="/var/lib/kubelet/pods/6cccfb03-002e-4e09-b630-501bfe258139/volumes" Feb 20 15:17:56.114807 master-0 kubenswrapper[28120]: I0220 15:17:56.114776 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab44cbd-6925-4f66-9d4c-9b6489b4fa6f" path="/var/lib/kubelet/pods/bab44cbd-6925-4f66-9d4c-9b6489b4fa6f/volumes" Feb 20 15:17:56.122929 master-0 kubenswrapper[28120]: I0220 15:17:56.122872 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-backup-0"] Feb 20 15:17:56.169182 master-0 kubenswrapper[28120]: I0220 15:17:56.169121 28120 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-dns-swift-storage-0\") pod \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " Feb 20 15:17:56.169305 master-0 kubenswrapper[28120]: I0220 15:17:56.169283 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-ovsdbserver-nb\") pod \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " Feb 20 15:17:56.169413 master-0 kubenswrapper[28120]: I0220 15:17:56.169390 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-dns-svc\") pod \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " Feb 20 15:17:56.169489 master-0 kubenswrapper[28120]: I0220 15:17:56.169465 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-ovsdbserver-sb\") pod \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " Feb 20 15:17:56.169551 master-0 kubenswrapper[28120]: I0220 15:17:56.169530 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcjhn\" (UniqueName: \"kubernetes.io/projected/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-kube-api-access-gcjhn\") pod \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " Feb 20 15:17:56.169651 master-0 kubenswrapper[28120]: I0220 15:17:56.169622 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-config\") pod \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\" (UID: \"fa98495d-36f2-4d3b-ad8a-139b1d11a4df\") " Feb 20 15:17:56.170401 master-0 kubenswrapper[28120]: I0220 15:17:56.170367 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-config-data\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.170453 master-0 kubenswrapper[28120]: I0220 15:17:56.170421 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-etc-machine-id\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.170453 master-0 kubenswrapper[28120]: I0220 15:17:56.170447 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-config-data-custom\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.170514 master-0 kubenswrapper[28120]: I0220 15:17:56.170488 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-etc-machine-id\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.170673 master-0 kubenswrapper[28120]: I0220 15:17:56.170620 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-etc-nvme\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.170731 master-0 kubenswrapper[28120]: I0220 15:17:56.170710 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfzk\" (UniqueName: \"kubernetes.io/projected/ba7d8e3e-cda5-4845-9c73-74221a517924-kube-api-access-4qfzk\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.170784 master-0 kubenswrapper[28120]: I0220 15:17:56.170770 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7d8e3e-cda5-4845-9c73-74221a517924-config-data\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.170904 master-0 kubenswrapper[28120]: I0220 15:17:56.170881 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxnrx\" (UniqueName: \"kubernetes.io/projected/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-kube-api-access-qxnrx\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.170961 master-0 kubenswrapper[28120]: I0220 15:17:56.170950 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-var-locks-cinder\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.171024 master-0 kubenswrapper[28120]: I0220 15:17:56.171008 28120 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-etc-iscsi\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.171116 master-0 kubenswrapper[28120]: I0220 15:17:56.171097 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-scripts\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.172794 master-0 kubenswrapper[28120]: I0220 15:17:56.172749 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-kube-api-access-gcjhn" (OuterVolumeSpecName: "kube-api-access-gcjhn") pod "fa98495d-36f2-4d3b-ad8a-139b1d11a4df" (UID: "fa98495d-36f2-4d3b-ad8a-139b1d11a4df"). InnerVolumeSpecName "kube-api-access-gcjhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:56.172934 master-0 kubenswrapper[28120]: I0220 15:17:56.172880 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-lib-modules\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.172982 master-0 kubenswrapper[28120]: I0220 15:17:56.172943 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-var-lib-cinder\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.173042 master-0 kubenswrapper[28120]: I0220 15:17:56.173022 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7d8e3e-cda5-4845-9c73-74221a517924-scripts\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.173078 master-0 kubenswrapper[28120]: I0220 15:17:56.173060 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-sys\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.173118 master-0 kubenswrapper[28120]: I0220 15:17:56.173077 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7d8e3e-cda5-4845-9c73-74221a517924-combined-ca-bundle\") pod \"cinder-eea69-backup-0\" (UID: 
\"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.173118 master-0 kubenswrapper[28120]: I0220 15:17:56.173096 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-run\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.173118 master-0 kubenswrapper[28120]: I0220 15:17:56.173112 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba7d8e3e-cda5-4845-9c73-74221a517924-config-data-custom\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.173232 master-0 kubenswrapper[28120]: I0220 15:17:56.173167 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-var-locks-brick\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.173232 master-0 kubenswrapper[28120]: I0220 15:17:56.173219 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-dev\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.173311 master-0 kubenswrapper[28120]: I0220 15:17:56.173254 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-combined-ca-bundle\") pod 
\"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.173371 master-0 kubenswrapper[28120]: I0220 15:17:56.173349 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcjhn\" (UniqueName: \"kubernetes.io/projected/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-kube-api-access-gcjhn\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:56.259178 master-0 kubenswrapper[28120]: I0220 15:17:56.247915 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fa98495d-36f2-4d3b-ad8a-139b1d11a4df" (UID: "fa98495d-36f2-4d3b-ad8a-139b1d11a4df"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.264697 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fa98495d-36f2-4d3b-ad8a-139b1d11a4df" (UID: "fa98495d-36f2-4d3b-ad8a-139b1d11a4df"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.275736 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-var-locks-brick\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.275803 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-dev\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.275825 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-combined-ca-bundle\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.275838 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-var-locks-brick\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.275856 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-config-data\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 
15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.275899 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-etc-machine-id\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.275932 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-config-data-custom\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.275951 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-etc-machine-id\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.275995 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-etc-nvme\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276021 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qfzk\" (UniqueName: \"kubernetes.io/projected/ba7d8e3e-cda5-4845-9c73-74221a517924-kube-api-access-4qfzk\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 
kubenswrapper[28120]: I0220 15:17:56.276042 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7d8e3e-cda5-4845-9c73-74221a517924-config-data\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276085 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxnrx\" (UniqueName: \"kubernetes.io/projected/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-kube-api-access-qxnrx\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276112 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-etc-iscsi\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276128 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-var-locks-cinder\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276164 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-scripts\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276181 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-lib-modules\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276198 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-var-lib-cinder\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276223 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7d8e3e-cda5-4845-9c73-74221a517924-scripts\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276249 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7d8e3e-cda5-4845-9c73-74221a517924-combined-ca-bundle\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276264 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-sys\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276281 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-run\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276295 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba7d8e3e-cda5-4845-9c73-74221a517924-config-data-custom\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276349 28120 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276362 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276488 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-etc-iscsi\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.276961 master-0 kubenswrapper[28120]: I0220 15:17:56.276564 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-etc-machine-id\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.277863 master-0 kubenswrapper[28120]: I0220 
15:17:56.277333 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-etc-machine-id\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.277863 master-0 kubenswrapper[28120]: I0220 15:17:56.277411 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-etc-nvme\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.277863 master-0 kubenswrapper[28120]: I0220 15:17:56.277463 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-var-lib-cinder\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.277863 master-0 kubenswrapper[28120]: I0220 15:17:56.277500 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-var-locks-cinder\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.295953 master-0 kubenswrapper[28120]: I0220 15:17:56.279862 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba7d8e3e-cda5-4845-9c73-74221a517924-config-data-custom\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.295953 master-0 kubenswrapper[28120]: I0220 15:17:56.280265 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-config-data\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.295953 master-0 kubenswrapper[28120]: I0220 15:17:56.280308 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-dev\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.295953 master-0 kubenswrapper[28120]: I0220 15:17:56.281584 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa98495d-36f2-4d3b-ad8a-139b1d11a4df" (UID: "fa98495d-36f2-4d3b-ad8a-139b1d11a4df"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:56.295953 master-0 kubenswrapper[28120]: I0220 15:17:56.281848 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-config-data-custom\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.295953 master-0 kubenswrapper[28120]: I0220 15:17:56.282089 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-lib-modules\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.295953 master-0 kubenswrapper[28120]: I0220 15:17:56.282169 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-sys\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.295953 master-0 kubenswrapper[28120]: I0220 15:17:56.282203 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ba7d8e3e-cda5-4845-9c73-74221a517924-run\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.295953 master-0 kubenswrapper[28120]: I0220 15:17:56.282611 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-combined-ca-bundle\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.295953 master-0 kubenswrapper[28120]: I0220 15:17:56.295664 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7d8e3e-cda5-4845-9c73-74221a517924-config-data\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.295953 master-0 kubenswrapper[28120]: I0220 15:17:56.295898 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7d8e3e-cda5-4845-9c73-74221a517924-combined-ca-bundle\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.311972 master-0 kubenswrapper[28120]: I0220 15:17:56.301492 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-scripts\") pod \"cinder-eea69-scheduler-0\" (UID: 
\"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.311972 master-0 kubenswrapper[28120]: I0220 15:17:56.301632 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qfzk\" (UniqueName: \"kubernetes.io/projected/ba7d8e3e-cda5-4845-9c73-74221a517924-kube-api-access-4qfzk\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.311972 master-0 kubenswrapper[28120]: I0220 15:17:56.304880 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7d8e3e-cda5-4845-9c73-74221a517924-scripts\") pod \"cinder-eea69-backup-0\" (UID: \"ba7d8e3e-cda5-4845-9c73-74221a517924\") " pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.311972 master-0 kubenswrapper[28120]: I0220 15:17:56.305013 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxnrx\" (UniqueName: \"kubernetes.io/projected/de23a98a-f6a0-44f9-95ae-fc4c396bf32a-kube-api-access-qxnrx\") pod \"cinder-eea69-scheduler-0\" (UID: \"de23a98a-f6a0-44f9-95ae-fc4c396bf32a\") " pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.322995 master-0 kubenswrapper[28120]: I0220 15:17:56.311549 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-config" (OuterVolumeSpecName: "config") pod "fa98495d-36f2-4d3b-ad8a-139b1d11a4df" (UID: "fa98495d-36f2-4d3b-ad8a-139b1d11a4df"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:56.322995 master-0 kubenswrapper[28120]: I0220 15:17:56.315437 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fa98495d-36f2-4d3b-ad8a-139b1d11a4df" (UID: "fa98495d-36f2-4d3b-ad8a-139b1d11a4df"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:56.349041 master-0 kubenswrapper[28120]: I0220 15:17:56.344886 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-kxg76"] Feb 20 15:17:56.357710 master-0 kubenswrapper[28120]: I0220 15:17:56.357121 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:17:56.373654 master-0 kubenswrapper[28120]: I0220 15:17:56.372802 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-4668-account-create-update-5kcbc"] Feb 20 15:17:56.379895 master-0 kubenswrapper[28120]: I0220 15:17:56.379539 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:56.379895 master-0 kubenswrapper[28120]: I0220 15:17:56.379617 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:56.379895 master-0 kubenswrapper[28120]: I0220 15:17:56.379628 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa98495d-36f2-4d3b-ad8a-139b1d11a4df-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:56.383616 master-0 kubenswrapper[28120]: I0220 15:17:56.382230 28120 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-eea69-backup-0" Feb 20 15:17:56.385214 master-0 kubenswrapper[28120]: I0220 15:17:56.385162 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-7dbf56d775-btzqc"] Feb 20 15:17:56.403419 master-0 kubenswrapper[28120]: W0220 15:17:56.403386 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod266c4a44_1f0a_468c_99c1_1dbdab46f6ad.slice/crio-82a3336a6ae65fadc070006de83e1844dc9bc6b2ecf86c6890b2d5bbf34a57ed WatchSource:0}: Error finding container 82a3336a6ae65fadc070006de83e1844dc9bc6b2ecf86c6890b2d5bbf34a57ed: Status 404 returned error can't find the container with id 82a3336a6ae65fadc070006de83e1844dc9bc6b2ecf86c6890b2d5bbf34a57ed Feb 20 15:17:56.544329 master-0 kubenswrapper[28120]: I0220 15:17:56.543520 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" event={"ID":"b640c921-f9d7-44ff-9bfe-44f48306a404","Type":"ContainerStarted","Data":"9706e13dbdbff3eeaec0217d289f69b6dc391787bd02eca72199ee4bae3c2799"} Feb 20 15:17:56.560477 master-0 kubenswrapper[28120]: I0220 15:17:56.546840 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" event={"ID":"fa98495d-36f2-4d3b-ad8a-139b1d11a4df","Type":"ContainerDied","Data":"77215333360e9a286ad82a4148c4a9522a7e6928a417b61852d6bf5bfc80b3ac"} Feb 20 15:17:56.560477 master-0 kubenswrapper[28120]: I0220 15:17:56.546876 28120 scope.go:117] "RemoveContainer" containerID="8e54cb78c719e825a618d3ebdccf42c1ca9861421fbd8b4dadf610d3b87fdfe3" Feb 20 15:17:56.560477 master-0 kubenswrapper[28120]: I0220 15:17:56.546999 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c54fb858c-bpkz9" Feb 20 15:17:56.571972 master-0 kubenswrapper[28120]: I0220 15:17:56.568327 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" event={"ID":"266c4a44-1f0a-468c-99c1-1dbdab46f6ad","Type":"ContainerStarted","Data":"82a3336a6ae65fadc070006de83e1844dc9bc6b2ecf86c6890b2d5bbf34a57ed"} Feb 20 15:17:56.571972 master-0 kubenswrapper[28120]: I0220 15:17:56.569793 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-kxg76" event={"ID":"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c","Type":"ContainerStarted","Data":"9482dc2d372396dc9152704663745aae128a3e0007d3ca345ba84c98b1ff0e3b"} Feb 20 15:17:56.678399 master-0 kubenswrapper[28120]: I0220 15:17:56.678347 28120 scope.go:117] "RemoveContainer" containerID="e6f32a45a3d25836ab37e8f3290cecea8876a39d99ad6f7689d8d741d28473b4" Feb 20 15:17:56.714421 master-0 kubenswrapper[28120]: I0220 15:17:56.714368 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c54fb858c-bpkz9"] Feb 20 15:17:56.728541 master-0 kubenswrapper[28120]: I0220 15:17:56.728489 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c54fb858c-bpkz9"] Feb 20 15:17:56.773189 master-0 kubenswrapper[28120]: I0220 15:17:56.773066 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"] Feb 20 15:17:56.789839 master-0 kubenswrapper[28120]: I0220 15:17:56.789796 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-volume-lvm-iscsi-0"] Feb 20 15:17:56.806859 master-0 kubenswrapper[28120]: I0220 15:17:56.806811 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-749b5b76fd-6zvxh"] Feb 20 15:17:56.813298 master-0 kubenswrapper[28120]: W0220 15:17:56.813249 28120 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89d3e8f0_e3b4_4ec7_870a_9fc5934f91c8.slice/crio-288b476546e53a74c0a10270dda70d19b6d4247e66baf0dbac0b86eae2dccd4f WatchSource:0}: Error finding container 288b476546e53a74c0a10270dda70d19b6d4247e66baf0dbac0b86eae2dccd4f: Status 404 returned error can't find the container with id 288b476546e53a74c0a10270dda70d19b6d4247e66baf0dbac0b86eae2dccd4f Feb 20 15:17:56.890780 master-0 kubenswrapper[28120]: W0220 15:17:56.889527 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06327dac_718e_426d_8c0e_9955f38106f7.slice/crio-315fdc6c690da58fb34bee6765a7504fad7d4e60ebec1f2523615db3695f95ab WatchSource:0}: Error finding container 315fdc6c690da58fb34bee6765a7504fad7d4e60ebec1f2523615db3695f95ab: Status 404 returned error can't find the container with id 315fdc6c690da58fb34bee6765a7504fad7d4e60ebec1f2523615db3695f95ab Feb 20 15:17:56.934700 master-0 kubenswrapper[28120]: I0220 15:17:56.934632 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-scheduler-0"] Feb 20 15:17:57.015699 master-0 kubenswrapper[28120]: I0220 15:17:57.015644 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Feb 20 15:17:57.019190 master-0 kubenswrapper[28120]: I0220 15:17:57.019147 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Feb 20 15:17:57.021390 master-0 kubenswrapper[28120]: I0220 15:17:57.021229 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Feb 20 15:17:57.021505 master-0 kubenswrapper[28120]: I0220 15:17:57.021401 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Feb 20 15:17:57.037882 master-0 kubenswrapper[28120]: I0220 15:17:57.037778 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 20 15:17:57.064417 master-0 kubenswrapper[28120]: I0220 15:17:57.064390 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-eea69-backup-0"] Feb 20 15:17:57.199210 master-0 kubenswrapper[28120]: I0220 15:17:57.199163 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46zs2\" (UniqueName: \"kubernetes.io/projected/af765e06-e2aa-4239-8a51-fc29e02fa257-kube-api-access-46zs2\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.199309 master-0 kubenswrapper[28120]: I0220 15:17:57.199289 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af765e06-e2aa-4239-8a51-fc29e02fa257-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.199436 master-0 kubenswrapper[28120]: I0220 15:17:57.199416 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-148e5e82-9dfd-4d35-a4ce-5a593f61ac1d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1a82c4d4-84c5-4807-bb63-f114614f5a2f\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " 
pod="openstack/ironic-conductor-0" Feb 20 15:17:57.199479 master-0 kubenswrapper[28120]: I0220 15:17:57.199446 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af765e06-e2aa-4239-8a51-fc29e02fa257-config-data\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.199510 master-0 kubenswrapper[28120]: I0220 15:17:57.199489 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/af765e06-e2aa-4239-8a51-fc29e02fa257-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.199567 master-0 kubenswrapper[28120]: I0220 15:17:57.199545 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af765e06-e2aa-4239-8a51-fc29e02fa257-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.199622 master-0 kubenswrapper[28120]: I0220 15:17:57.199603 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af765e06-e2aa-4239-8a51-fc29e02fa257-scripts\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.199768 master-0 kubenswrapper[28120]: I0220 15:17:57.199634 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/af765e06-e2aa-4239-8a51-fc29e02fa257-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " 
pod="openstack/ironic-conductor-0" Feb 20 15:17:57.301981 master-0 kubenswrapper[28120]: I0220 15:17:57.301735 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/af765e06-e2aa-4239-8a51-fc29e02fa257-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.301981 master-0 kubenswrapper[28120]: I0220 15:17:57.301821 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af765e06-e2aa-4239-8a51-fc29e02fa257-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.301981 master-0 kubenswrapper[28120]: I0220 15:17:57.301854 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af765e06-e2aa-4239-8a51-fc29e02fa257-scripts\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.301981 master-0 kubenswrapper[28120]: I0220 15:17:57.301875 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/af765e06-e2aa-4239-8a51-fc29e02fa257-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.301981 master-0 kubenswrapper[28120]: I0220 15:17:57.301892 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46zs2\" (UniqueName: \"kubernetes.io/projected/af765e06-e2aa-4239-8a51-fc29e02fa257-kube-api-access-46zs2\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.301981 master-0 
kubenswrapper[28120]: I0220 15:17:57.301980 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af765e06-e2aa-4239-8a51-fc29e02fa257-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.302296 master-0 kubenswrapper[28120]: I0220 15:17:57.302056 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-148e5e82-9dfd-4d35-a4ce-5a593f61ac1d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1a82c4d4-84c5-4807-bb63-f114614f5a2f\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.302296 master-0 kubenswrapper[28120]: I0220 15:17:57.302085 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af765e06-e2aa-4239-8a51-fc29e02fa257-config-data\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.303365 master-0 kubenswrapper[28120]: I0220 15:17:57.303201 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/af765e06-e2aa-4239-8a51-fc29e02fa257-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.304970 master-0 kubenswrapper[28120]: I0220 15:17:57.304502 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 20 15:17:57.304970 master-0 kubenswrapper[28120]: I0220 15:17:57.304528 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-148e5e82-9dfd-4d35-a4ce-5a593f61ac1d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1a82c4d4-84c5-4807-bb63-f114614f5a2f\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6d6d38e2d74c2d87606fa216c57334c3c6646d87990bf35cb835d75469b24230/globalmount\"" pod="openstack/ironic-conductor-0" Feb 20 15:17:57.308440 master-0 kubenswrapper[28120]: I0220 15:17:57.308377 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af765e06-e2aa-4239-8a51-fc29e02fa257-config-data\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.308541 master-0 kubenswrapper[28120]: I0220 15:17:57.308492 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/af765e06-e2aa-4239-8a51-fc29e02fa257-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.308685 master-0 kubenswrapper[28120]: I0220 15:17:57.308554 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af765e06-e2aa-4239-8a51-fc29e02fa257-scripts\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.309315 master-0 kubenswrapper[28120]: I0220 15:17:57.309220 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af765e06-e2aa-4239-8a51-fc29e02fa257-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " 
pod="openstack/ironic-conductor-0" Feb 20 15:17:57.310308 master-0 kubenswrapper[28120]: I0220 15:17:57.309784 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af765e06-e2aa-4239-8a51-fc29e02fa257-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.317948 master-0 kubenswrapper[28120]: I0220 15:17:57.317898 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46zs2\" (UniqueName: \"kubernetes.io/projected/af765e06-e2aa-4239-8a51-fc29e02fa257-kube-api-access-46zs2\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:57.593411 master-0 kubenswrapper[28120]: I0220 15:17:57.593276 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" event={"ID":"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0","Type":"ContainerStarted","Data":"ee41d7b29c1b26a122dd6656a1b0780551cdb657e03d53ff74e8ad1bc9851027"} Feb 20 15:17:57.594861 master-0 kubenswrapper[28120]: I0220 15:17:57.594824 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" event={"ID":"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8","Type":"ContainerStarted","Data":"288b476546e53a74c0a10270dda70d19b6d4247e66baf0dbac0b86eae2dccd4f"} Feb 20 15:17:57.599225 master-0 kubenswrapper[28120]: I0220 15:17:57.599175 28120 generic.go:334] "Generic (PLEG): container finished" podID="cb8ac0c0-7862-4b33-a63e-3d5ac82e350c" containerID="5b704717e3a7c477709ac978f38a86b74364a56cbc9906602f4843533f089b11" exitCode=0 Feb 20 15:17:57.599225 master-0 kubenswrapper[28120]: I0220 15:17:57.599221 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-kxg76" 
event={"ID":"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c","Type":"ContainerDied","Data":"5b704717e3a7c477709ac978f38a86b74364a56cbc9906602f4843533f089b11"} Feb 20 15:17:57.601036 master-0 kubenswrapper[28120]: I0220 15:17:57.600995 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-backup-0" event={"ID":"ba7d8e3e-cda5-4845-9c73-74221a517924","Type":"ContainerStarted","Data":"ecbbcb68e3f74bc4e45e0523f0209a54436d6ed9534d2cabaeb656714679f98e"} Feb 20 15:17:57.602032 master-0 kubenswrapper[28120]: I0220 15:17:57.601896 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-scheduler-0" event={"ID":"de23a98a-f6a0-44f9-95ae-fc4c396bf32a","Type":"ContainerStarted","Data":"389ef86fe649aafd9741c5608e5cdfa127527f48186eb42e11173f3d5224966c"} Feb 20 15:17:57.605436 master-0 kubenswrapper[28120]: I0220 15:17:57.605385 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-749b5b76fd-6zvxh" event={"ID":"06327dac-718e-426d-8c0e-9955f38106f7","Type":"ContainerStarted","Data":"315fdc6c690da58fb34bee6765a7504fad7d4e60ebec1f2523615db3695f95ab"} Feb 20 15:17:57.609564 master-0 kubenswrapper[28120]: I0220 15:17:57.609424 28120 generic.go:334] "Generic (PLEG): container finished" podID="b640c921-f9d7-44ff-9bfe-44f48306a404" containerID="fa2849295ab76a6c03f14a272aa24ac101444186bb44c57ff5b99a70c4e6d594" exitCode=0 Feb 20 15:17:57.609564 master-0 kubenswrapper[28120]: I0220 15:17:57.609461 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" event={"ID":"b640c921-f9d7-44ff-9bfe-44f48306a404","Type":"ContainerDied","Data":"fa2849295ab76a6c03f14a272aa24ac101444186bb44c57ff5b99a70c4e6d594"} Feb 20 15:17:58.067943 master-0 kubenswrapper[28120]: I0220 15:17:58.067899 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa98495d-36f2-4d3b-ad8a-139b1d11a4df" 
path="/var/lib/kubelet/pods/fa98495d-36f2-4d3b-ad8a-139b1d11a4df/volumes" Feb 20 15:17:58.545945 master-0 kubenswrapper[28120]: I0220 15:17:58.544262 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-74d7ccf7c8-wrvs9"] Feb 20 15:17:58.552317 master-0 kubenswrapper[28120]: I0220 15:17:58.546559 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.579622 master-0 kubenswrapper[28120]: I0220 15:17:58.571344 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Feb 20 15:17:58.584876 master-0 kubenswrapper[28120]: I0220 15:17:58.584815 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-74d7ccf7c8-wrvs9"] Feb 20 15:17:58.586119 master-0 kubenswrapper[28120]: I0220 15:17:58.586082 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Feb 20 15:17:58.656413 master-0 kubenswrapper[28120]: I0220 15:17:58.656343 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-config-data\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.656913 master-0 kubenswrapper[28120]: I0220 15:17:58.656426 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/aa3237e4-664f-41e0-8175-7745ff681676-etc-podinfo\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.657202 master-0 kubenswrapper[28120]: I0220 15:17:58.657020 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-public-tls-certs\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.657317 master-0 kubenswrapper[28120]: I0220 15:17:58.657301 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa3237e4-664f-41e0-8175-7745ff681676-logs\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.657510 master-0 kubenswrapper[28120]: I0220 15:17:58.657492 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q444\" (UniqueName: \"kubernetes.io/projected/aa3237e4-664f-41e0-8175-7745ff681676-kube-api-access-6q444\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.657590 master-0 kubenswrapper[28120]: I0220 15:17:58.657578 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-internal-tls-certs\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.657651 master-0 kubenswrapper[28120]: I0220 15:17:58.657611 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-config-data-custom\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.657764 master-0 kubenswrapper[28120]: I0220 15:17:58.657751 28120 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/aa3237e4-664f-41e0-8175-7745ff681676-config-data-merged\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.657824 master-0 kubenswrapper[28120]: I0220 15:17:58.657803 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-combined-ca-bundle\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.657907 master-0 kubenswrapper[28120]: I0220 15:17:58.657880 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-scripts\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.694742 master-0 kubenswrapper[28120]: I0220 15:17:58.693684 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-backup-0" event={"ID":"ba7d8e3e-cda5-4845-9c73-74221a517924","Type":"ContainerStarted","Data":"c27cfdd7dd576c79954973406471cbcfc7d3b74fe9beb63c0d8b36d2dd9ad1d2"} Feb 20 15:17:58.751694 master-0 kubenswrapper[28120]: I0220 15:17:58.728244 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-scheduler-0" event={"ID":"de23a98a-f6a0-44f9-95ae-fc4c396bf32a","Type":"ContainerStarted","Data":"ec44e69b9c800a6d63b6ec2d2cf7b7a793cbec8c59e2dba3cf6c2d11639f60d3"} Feb 20 15:17:58.751694 master-0 kubenswrapper[28120]: I0220 15:17:58.729742 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" 
event={"ID":"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0","Type":"ContainerStarted","Data":"c11729d9f6b459a27098598a0c87c8c137df16c03d4a4e57b9cd5a7ee06ce7d0"} Feb 20 15:17:58.751694 master-0 kubenswrapper[28120]: I0220 15:17:58.730787 28120 generic.go:334] "Generic (PLEG): container finished" podID="89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" containerID="ef7a450eba6522e9892401d56871b2da56525ce9347458a175974c7cbcbb863a" exitCode=0 Feb 20 15:17:58.751694 master-0 kubenswrapper[28120]: I0220 15:17:58.730993 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" event={"ID":"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8","Type":"ContainerDied","Data":"ef7a450eba6522e9892401d56871b2da56525ce9347458a175974c7cbcbb863a"} Feb 20 15:17:58.761946 master-0 kubenswrapper[28120]: I0220 15:17:58.759540 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa3237e4-664f-41e0-8175-7745ff681676-logs\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.761946 master-0 kubenswrapper[28120]: I0220 15:17:58.759599 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q444\" (UniqueName: \"kubernetes.io/projected/aa3237e4-664f-41e0-8175-7745ff681676-kube-api-access-6q444\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.761946 master-0 kubenswrapper[28120]: I0220 15:17:58.759642 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-internal-tls-certs\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.761946 master-0 kubenswrapper[28120]: I0220 
15:17:58.759664 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-config-data-custom\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.761946 master-0 kubenswrapper[28120]: I0220 15:17:58.759698 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/aa3237e4-664f-41e0-8175-7745ff681676-config-data-merged\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.761946 master-0 kubenswrapper[28120]: I0220 15:17:58.759717 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-combined-ca-bundle\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.761946 master-0 kubenswrapper[28120]: I0220 15:17:58.759752 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-scripts\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.761946 master-0 kubenswrapper[28120]: I0220 15:17:58.759835 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-config-data\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.761946 master-0 kubenswrapper[28120]: I0220 15:17:58.759861 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/aa3237e4-664f-41e0-8175-7745ff681676-etc-podinfo\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.761946 master-0 kubenswrapper[28120]: I0220 15:17:58.759892 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-public-tls-certs\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.763861 master-0 kubenswrapper[28120]: I0220 15:17:58.763352 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-config-data-custom\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.774414 master-0 kubenswrapper[28120]: I0220 15:17:58.763459 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa3237e4-664f-41e0-8175-7745ff681676-logs\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.774414 master-0 kubenswrapper[28120]: I0220 15:17:58.764592 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/aa3237e4-664f-41e0-8175-7745ff681676-config-data-merged\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.774414 master-0 kubenswrapper[28120]: I0220 15:17:58.764956 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-public-tls-certs\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.774414 master-0 kubenswrapper[28120]: I0220 15:17:58.765130 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-scripts\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.774414 master-0 kubenswrapper[28120]: I0220 15:17:58.770350 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/aa3237e4-664f-41e0-8175-7745ff681676-etc-podinfo\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.774414 master-0 kubenswrapper[28120]: I0220 15:17:58.774254 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-combined-ca-bundle\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.774414 master-0 kubenswrapper[28120]: I0220 15:17:58.774318 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-internal-tls-certs\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.774414 master-0 kubenswrapper[28120]: I0220 15:17:58.774366 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aa3237e4-664f-41e0-8175-7745ff681676-config-data\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.808235 master-0 kubenswrapper[28120]: I0220 15:17:58.808129 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q444\" (UniqueName: \"kubernetes.io/projected/aa3237e4-664f-41e0-8175-7745ff681676-kube-api-access-6q444\") pod \"ironic-74d7ccf7c8-wrvs9\" (UID: \"aa3237e4-664f-41e0-8175-7745ff681676\") " pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:58.825435 master-0 kubenswrapper[28120]: I0220 15:17:58.825389 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-148e5e82-9dfd-4d35-a4ce-5a593f61ac1d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1a82c4d4-84c5-4807-bb63-f114614f5a2f\") pod \"ironic-conductor-0\" (UID: \"af765e06-e2aa-4239-8a51-fc29e02fa257\") " pod="openstack/ironic-conductor-0" Feb 20 15:17:58.926623 master-0 kubenswrapper[28120]: I0220 15:17:58.926562 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:17:59.036844 master-0 kubenswrapper[28120]: I0220 15:17:59.036341 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Feb 20 15:17:59.514725 master-0 kubenswrapper[28120]: I0220 15:17:59.514670 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" Feb 20 15:17:59.537896 master-0 kubenswrapper[28120]: I0220 15:17:59.537847 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-kxg76" Feb 20 15:17:59.606096 master-0 kubenswrapper[28120]: I0220 15:17:59.603091 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg75t\" (UniqueName: \"kubernetes.io/projected/b640c921-f9d7-44ff-9bfe-44f48306a404-kube-api-access-bg75t\") pod \"b640c921-f9d7-44ff-9bfe-44f48306a404\" (UID: \"b640c921-f9d7-44ff-9bfe-44f48306a404\") " Feb 20 15:17:59.606096 master-0 kubenswrapper[28120]: I0220 15:17:59.603420 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b640c921-f9d7-44ff-9bfe-44f48306a404-operator-scripts\") pod \"b640c921-f9d7-44ff-9bfe-44f48306a404\" (UID: \"b640c921-f9d7-44ff-9bfe-44f48306a404\") " Feb 20 15:17:59.610416 master-0 kubenswrapper[28120]: I0220 15:17:59.607165 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b640c921-f9d7-44ff-9bfe-44f48306a404-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b640c921-f9d7-44ff-9bfe-44f48306a404" (UID: "b640c921-f9d7-44ff-9bfe-44f48306a404"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:59.610416 master-0 kubenswrapper[28120]: I0220 15:17:59.609100 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b640c921-f9d7-44ff-9bfe-44f48306a404-kube-api-access-bg75t" (OuterVolumeSpecName: "kube-api-access-bg75t") pod "b640c921-f9d7-44ff-9bfe-44f48306a404" (UID: "b640c921-f9d7-44ff-9bfe-44f48306a404"). InnerVolumeSpecName "kube-api-access-bg75t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:59.713553 master-0 kubenswrapper[28120]: I0220 15:17:59.713320 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnwpw\" (UniqueName: \"kubernetes.io/projected/cb8ac0c0-7862-4b33-a63e-3d5ac82e350c-kube-api-access-gnwpw\") pod \"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c\" (UID: \"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c\") " Feb 20 15:17:59.713553 master-0 kubenswrapper[28120]: I0220 15:17:59.713467 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb8ac0c0-7862-4b33-a63e-3d5ac82e350c-operator-scripts\") pod \"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c\" (UID: \"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c\") " Feb 20 15:17:59.714291 master-0 kubenswrapper[28120]: I0220 15:17:59.714005 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b640c921-f9d7-44ff-9bfe-44f48306a404-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:59.714291 master-0 kubenswrapper[28120]: I0220 15:17:59.714020 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg75t\" (UniqueName: \"kubernetes.io/projected/b640c921-f9d7-44ff-9bfe-44f48306a404-kube-api-access-bg75t\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:59.725101 master-0 kubenswrapper[28120]: I0220 15:17:59.725033 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8ac0c0-7862-4b33-a63e-3d5ac82e350c-kube-api-access-gnwpw" (OuterVolumeSpecName: "kube-api-access-gnwpw") pod "cb8ac0c0-7862-4b33-a63e-3d5ac82e350c" (UID: "cb8ac0c0-7862-4b33-a63e-3d5ac82e350c"). InnerVolumeSpecName "kube-api-access-gnwpw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:17:59.744118 master-0 kubenswrapper[28120]: I0220 15:17:59.742503 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8ac0c0-7862-4b33-a63e-3d5ac82e350c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb8ac0c0-7862-4b33-a63e-3d5ac82e350c" (UID: "cb8ac0c0-7862-4b33-a63e-3d5ac82e350c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:17:59.787148 master-0 kubenswrapper[28120]: I0220 15:17:59.787100 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-kxg76" event={"ID":"cb8ac0c0-7862-4b33-a63e-3d5ac82e350c","Type":"ContainerDied","Data":"9482dc2d372396dc9152704663745aae128a3e0007d3ca345ba84c98b1ff0e3b"} Feb 20 15:17:59.787148 master-0 kubenswrapper[28120]: I0220 15:17:59.787148 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9482dc2d372396dc9152704663745aae128a3e0007d3ca345ba84c98b1ff0e3b" Feb 20 15:17:59.787503 master-0 kubenswrapper[28120]: I0220 15:17:59.787209 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-kxg76" Feb 20 15:17:59.813371 master-0 kubenswrapper[28120]: I0220 15:17:59.800643 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" event={"ID":"b640c921-f9d7-44ff-9bfe-44f48306a404","Type":"ContainerDied","Data":"9706e13dbdbff3eeaec0217d289f69b6dc391787bd02eca72199ee4bae3c2799"} Feb 20 15:17:59.813371 master-0 kubenswrapper[28120]: I0220 15:17:59.800708 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9706e13dbdbff3eeaec0217d289f69b6dc391787bd02eca72199ee4bae3c2799" Feb 20 15:17:59.813371 master-0 kubenswrapper[28120]: I0220 15:17:59.800789 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-4668-account-create-update-5kcbc" Feb 20 15:17:59.815816 master-0 kubenswrapper[28120]: I0220 15:17:59.815773 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb8ac0c0-7862-4b33-a63e-3d5ac82e350c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:17:59.815816 master-0 kubenswrapper[28120]: I0220 15:17:59.815811 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnwpw\" (UniqueName: \"kubernetes.io/projected/cb8ac0c0-7862-4b33-a63e-3d5ac82e350c-kube-api-access-gnwpw\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:00.202264 master-0 kubenswrapper[28120]: I0220 15:18:00.202032 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-74d7ccf7c8-wrvs9"] Feb 20 15:18:00.262718 master-0 kubenswrapper[28120]: I0220 15:18:00.262681 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 20 15:18:00.837676 master-0 kubenswrapper[28120]: I0220 15:18:00.837520 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-74d7ccf7c8-wrvs9" 
event={"ID":"aa3237e4-664f-41e0-8175-7745ff681676","Type":"ContainerStarted","Data":"31153d3d7fb386ab8f686f4c4abef82ef2f477a121af9cffaff2bc3cb2b2b575"} Feb 20 15:18:00.865004 master-0 kubenswrapper[28120]: I0220 15:18:00.864899 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" event={"ID":"e6599b3f-4d35-404d-9e2b-15fdcc1aa8c0","Type":"ContainerStarted","Data":"92298d17611cfab817bfe929d2bf07d55fa9798fac317f28277dce53f12cc7bf"} Feb 20 15:18:00.868860 master-0 kubenswrapper[28120]: I0220 15:18:00.868799 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" event={"ID":"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8","Type":"ContainerStarted","Data":"7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544"} Feb 20 15:18:00.869163 master-0 kubenswrapper[28120]: I0220 15:18:00.869142 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" Feb 20 15:18:00.877309 master-0 kubenswrapper[28120]: I0220 15:18:00.877260 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"af765e06-e2aa-4239-8a51-fc29e02fa257","Type":"ContainerStarted","Data":"e717b111a1f0bf03702d41380178b6b08d100dbc038f3c01832194221d3caafa"} Feb 20 15:18:00.883791 master-0 kubenswrapper[28120]: I0220 15:18:00.883747 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" event={"ID":"266c4a44-1f0a-468c-99c1-1dbdab46f6ad","Type":"ContainerStarted","Data":"c2cbe1b3f568c4ebfd131adc685988a17eb378409e09c5cf7a91f840eace4e41"} Feb 20 15:18:00.884622 master-0 kubenswrapper[28120]: I0220 15:18:00.884598 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:18:00.889841 master-0 kubenswrapper[28120]: I0220 15:18:00.889788 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-eea69-backup-0" event={"ID":"ba7d8e3e-cda5-4845-9c73-74221a517924","Type":"ContainerStarted","Data":"8ff6ff917b02eb9a300deeca4e95e52f85d0349a4c9cc01e8386d0cace25e8c1"} Feb 20 15:18:00.909265 master-0 kubenswrapper[28120]: I0220 15:18:00.907184 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-eea69-scheduler-0" event={"ID":"de23a98a-f6a0-44f9-95ae-fc4c396bf32a","Type":"ContainerStarted","Data":"c4473515487e8013b7988e02450921d03884011c151e2b209f310a23fdcb549a"} Feb 20 15:18:00.917984 master-0 kubenswrapper[28120]: I0220 15:18:00.910877 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" podStartSLOduration=5.910861677 podStartE2EDuration="5.910861677s" podCreationTimestamp="2026-02-20 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:00.908502718 +0000 UTC m=+1019.169296301" watchObservedRunningTime="2026-02-20 15:18:00.910861677 +0000 UTC m=+1019.171655240" Feb 20 15:18:00.918464 master-0 kubenswrapper[28120]: I0220 15:18:00.918417 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-749b5b76fd-6zvxh" event={"ID":"06327dac-718e-426d-8c0e-9955f38106f7","Type":"ContainerStarted","Data":"cc35ce3e8d27ddd7ddf08a675cf2504bb16f1553b3925508f7b292174e29ca28"} Feb 20 15:18:00.943967 master-0 kubenswrapper[28120]: I0220 15:18:00.939641 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" podStartSLOduration=5.939624544 podStartE2EDuration="5.939624544s" podCreationTimestamp="2026-02-20 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:00.93745719 +0000 UTC m=+1019.198250753" watchObservedRunningTime="2026-02-20 15:18:00.939624544 +0000 UTC 
m=+1019.200418107" Feb 20 15:18:00.947633 master-0 kubenswrapper[28120]: I0220 15:18:00.946296 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7fb458c87b-4xkt8" Feb 20 15:18:00.947633 master-0 kubenswrapper[28120]: I0220 15:18:00.946710 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-eea69-volume-lvm-iscsi-0" Feb 20 15:18:00.980951 master-0 kubenswrapper[28120]: I0220 15:18:00.977281 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-eea69-scheduler-0" podStartSLOduration=5.977263332 podStartE2EDuration="5.977263332s" podCreationTimestamp="2026-02-20 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:00.975246651 +0000 UTC m=+1019.236040214" watchObservedRunningTime="2026-02-20 15:18:00.977263332 +0000 UTC m=+1019.238056895" Feb 20 15:18:01.007593 master-0 kubenswrapper[28120]: I0220 15:18:01.007136 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-eea69-backup-0" podStartSLOduration=6.007119316 podStartE2EDuration="6.007119316s" podCreationTimestamp="2026-02-20 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:01.00085481 +0000 UTC m=+1019.261648383" watchObservedRunningTime="2026-02-20 15:18:01.007119316 +0000 UTC m=+1019.267912879" Feb 20 15:18:01.059277 master-0 kubenswrapper[28120]: I0220 15:18:01.048979 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" podStartSLOduration=3.969604161 podStartE2EDuration="7.048959889s" podCreationTimestamp="2026-02-20 15:17:54 +0000 UTC" firstStartedPulling="2026-02-20 15:17:56.424816839 +0000 UTC m=+1014.685610402" 
lastFinishedPulling="2026-02-20 15:17:59.504172567 +0000 UTC m=+1017.764966130" observedRunningTime="2026-02-20 15:18:01.045233986 +0000 UTC m=+1019.306027549" watchObservedRunningTime="2026-02-20 15:18:01.048959889 +0000 UTC m=+1019.309753452" Feb 20 15:18:01.357831 master-0 kubenswrapper[28120]: I0220 15:18:01.357724 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-eea69-scheduler-0" Feb 20 15:18:01.367906 master-0 kubenswrapper[28120]: I0220 15:18:01.367827 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55b884df7b-l5kgx"] Feb 20 15:18:01.369745 master-0 kubenswrapper[28120]: E0220 15:18:01.369222 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8ac0c0-7862-4b33-a63e-3d5ac82e350c" containerName="mariadb-database-create" Feb 20 15:18:01.369745 master-0 kubenswrapper[28120]: I0220 15:18:01.369244 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8ac0c0-7862-4b33-a63e-3d5ac82e350c" containerName="mariadb-database-create" Feb 20 15:18:01.369745 master-0 kubenswrapper[28120]: E0220 15:18:01.369271 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b640c921-f9d7-44ff-9bfe-44f48306a404" containerName="mariadb-account-create-update" Feb 20 15:18:01.369745 master-0 kubenswrapper[28120]: I0220 15:18:01.369277 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b640c921-f9d7-44ff-9bfe-44f48306a404" containerName="mariadb-account-create-update" Feb 20 15:18:01.369745 master-0 kubenswrapper[28120]: I0220 15:18:01.369579 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="b640c921-f9d7-44ff-9bfe-44f48306a404" containerName="mariadb-account-create-update" Feb 20 15:18:01.369745 master-0 kubenswrapper[28120]: I0220 15:18:01.369610 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8ac0c0-7862-4b33-a63e-3d5ac82e350c" containerName="mariadb-database-create" Feb 20 15:18:01.380375 master-0 
kubenswrapper[28120]: I0220 15:18:01.373596 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.432278 master-0 kubenswrapper[28120]: I0220 15:18:01.430431 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55b884df7b-l5kgx"]
Feb 20 15:18:01.439117 master-0 kubenswrapper[28120]: I0220 15:18:01.439069 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-eea69-backup-0"
Feb 20 15:18:01.526727 master-0 kubenswrapper[28120]: I0220 15:18:01.526687 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-config-data\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.526829 master-0 kubenswrapper[28120]: I0220 15:18:01.526752 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-scripts\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.526829 master-0 kubenswrapper[28120]: I0220 15:18:01.526823 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-combined-ca-bundle\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.526901 master-0 kubenswrapper[28120]: I0220 15:18:01.526855 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj5zb\" (UniqueName: \"kubernetes.io/projected/d1dca572-e6e3-4323-92ec-4a8dad5208a7-kube-api-access-sj5zb\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.526901 master-0 kubenswrapper[28120]: I0220 15:18:01.526876 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-public-tls-certs\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.526901 master-0 kubenswrapper[28120]: I0220 15:18:01.526898 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1dca572-e6e3-4323-92ec-4a8dad5208a7-logs\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.527006 master-0 kubenswrapper[28120]: I0220 15:18:01.526943 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-internal-tls-certs\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.629482 master-0 kubenswrapper[28120]: I0220 15:18:01.629366 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-scripts\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.630078 master-0 kubenswrapper[28120]: I0220 15:18:01.630049 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-combined-ca-bundle\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.630363 master-0 kubenswrapper[28120]: I0220 15:18:01.630311 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj5zb\" (UniqueName: \"kubernetes.io/projected/d1dca572-e6e3-4323-92ec-4a8dad5208a7-kube-api-access-sj5zb\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.630567 master-0 kubenswrapper[28120]: I0220 15:18:01.630552 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-public-tls-certs\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.630713 master-0 kubenswrapper[28120]: I0220 15:18:01.630700 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1dca572-e6e3-4323-92ec-4a8dad5208a7-logs\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.631152 master-0 kubenswrapper[28120]: I0220 15:18:01.631136 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-internal-tls-certs\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.631985 master-0 kubenswrapper[28120]: I0220 15:18:01.631966 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-config-data\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.632337 master-0 kubenswrapper[28120]: I0220 15:18:01.631407 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1dca572-e6e3-4323-92ec-4a8dad5208a7-logs\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.641779 master-0 kubenswrapper[28120]: I0220 15:18:01.641729 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-public-tls-certs\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.643632 master-0 kubenswrapper[28120]: I0220 15:18:01.642984 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-internal-tls-certs\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.645078 master-0 kubenswrapper[28120]: I0220 15:18:01.645051 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-config-data\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.650396 master-0 kubenswrapper[28120]: I0220 15:18:01.649365 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-scripts\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.659732 master-0 kubenswrapper[28120]: I0220 15:18:01.659689 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj5zb\" (UniqueName: \"kubernetes.io/projected/d1dca572-e6e3-4323-92ec-4a8dad5208a7-kube-api-access-sj5zb\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.674805 master-0 kubenswrapper[28120]: I0220 15:18:01.674772 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1dca572-e6e3-4323-92ec-4a8dad5208a7-combined-ca-bundle\") pod \"placement-55b884df7b-l5kgx\" (UID: \"d1dca572-e6e3-4323-92ec-4a8dad5208a7\") " pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.805950 master-0 kubenswrapper[28120]: I0220 15:18:01.805796 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:01.935458 master-0 kubenswrapper[28120]: I0220 15:18:01.935400 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"af765e06-e2aa-4239-8a51-fc29e02fa257","Type":"ContainerStarted","Data":"bfb3865ba6c27d1fdd421a9b3a8c18dd45a62bf75f58f9036e7de2d761b2a745"}
Feb 20 15:18:01.944248 master-0 kubenswrapper[28120]: I0220 15:18:01.944080 28120 generic.go:334] "Generic (PLEG): container finished" podID="06327dac-718e-426d-8c0e-9955f38106f7" containerID="cc35ce3e8d27ddd7ddf08a675cf2504bb16f1553b3925508f7b292174e29ca28" exitCode=0
Feb 20 15:18:01.944248 master-0 kubenswrapper[28120]: I0220 15:18:01.944172 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-749b5b76fd-6zvxh" event={"ID":"06327dac-718e-426d-8c0e-9955f38106f7","Type":"ContainerDied","Data":"cc35ce3e8d27ddd7ddf08a675cf2504bb16f1553b3925508f7b292174e29ca28"}
Feb 20 15:18:01.944248 master-0 kubenswrapper[28120]: I0220 15:18:01.944199 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-749b5b76fd-6zvxh" event={"ID":"06327dac-718e-426d-8c0e-9955f38106f7","Type":"ContainerStarted","Data":"4ccf190aa86d2c5bade22e9790a8ba37225937c227a9e2a74ecec25fdceb4969"}
Feb 20 15:18:01.948078 master-0 kubenswrapper[28120]: I0220 15:18:01.947362 28120 generic.go:334] "Generic (PLEG): container finished" podID="aa3237e4-664f-41e0-8175-7745ff681676" containerID="2f06932547357e9110f2abbdced57ee0dd20d755ce610f0dd43650aa0221cfa1" exitCode=0
Feb 20 15:18:01.953389 master-0 kubenswrapper[28120]: I0220 15:18:01.949140 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-74d7ccf7c8-wrvs9" event={"ID":"aa3237e4-664f-41e0-8175-7745ff681676","Type":"ContainerDied","Data":"2f06932547357e9110f2abbdced57ee0dd20d755ce610f0dd43650aa0221cfa1"}
Feb 20 15:18:02.289818 master-0 kubenswrapper[28120]: I0220 15:18:02.289771 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55b884df7b-l5kgx"]
Feb 20 15:18:02.969555 master-0 kubenswrapper[28120]: I0220 15:18:02.969422 28120 generic.go:334] "Generic (PLEG): container finished" podID="06327dac-718e-426d-8c0e-9955f38106f7" containerID="c303aae86d84a777e683318362a5cc79dfe43b781716e2f0590669fbe2c5cc97" exitCode=1
Feb 20 15:18:02.969555 master-0 kubenswrapper[28120]: I0220 15:18:02.969515 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-749b5b76fd-6zvxh" event={"ID":"06327dac-718e-426d-8c0e-9955f38106f7","Type":"ContainerDied","Data":"c303aae86d84a777e683318362a5cc79dfe43b781716e2f0590669fbe2c5cc97"}
Feb 20 15:18:02.973899 master-0 kubenswrapper[28120]: I0220 15:18:02.970181 28120 scope.go:117] "RemoveContainer" containerID="c303aae86d84a777e683318362a5cc79dfe43b781716e2f0590669fbe2c5cc97"
Feb 20 15:18:02.973899 master-0 kubenswrapper[28120]: I0220 15:18:02.972514 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-eea69-api-0"
Feb 20 15:18:02.976415 master-0 kubenswrapper[28120]: I0220 15:18:02.974736 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-74d7ccf7c8-wrvs9" event={"ID":"aa3237e4-664f-41e0-8175-7745ff681676","Type":"ContainerStarted","Data":"0efc7ef7618bdd294de1754bf15e4c2ef56d2794e13fab53281e808715a9d6d7"}
Feb 20 15:18:02.976415 master-0 kubenswrapper[28120]: I0220 15:18:02.974760 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-74d7ccf7c8-wrvs9" event={"ID":"aa3237e4-664f-41e0-8175-7745ff681676","Type":"ContainerStarted","Data":"b02679857d1d6bcae42c633b3dead564e4ed44cd87c3475f83ab4f51c17c68fb"}
Feb 20 15:18:02.976415 master-0 kubenswrapper[28120]: I0220 15:18:02.975261 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-74d7ccf7c8-wrvs9"
Feb 20 15:18:02.980153 master-0 kubenswrapper[28120]: I0220 15:18:02.978669 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55b884df7b-l5kgx" event={"ID":"d1dca572-e6e3-4323-92ec-4a8dad5208a7","Type":"ContainerStarted","Data":"6d5047484f79e19717e99e3e69b7d05f400b720cc637c4186cb0570770831fc8"}
Feb 20 15:18:02.980153 master-0 kubenswrapper[28120]: I0220 15:18:02.978727 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55b884df7b-l5kgx" event={"ID":"d1dca572-e6e3-4323-92ec-4a8dad5208a7","Type":"ContainerStarted","Data":"72e51c9c12c21aeb2b6480d9dfbd7f5ccd2df63b9a7a6032abdfcae2857f6093"}
Feb 20 15:18:02.980153 master-0 kubenswrapper[28120]: I0220 15:18:02.978743 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55b884df7b-l5kgx" event={"ID":"d1dca572-e6e3-4323-92ec-4a8dad5208a7","Type":"ContainerStarted","Data":"ccca2dbdb053125626e9d81e5341f81a5cb83c858046e4db65915d9aa07b75cd"}
Feb 20 15:18:03.052639 master-0 kubenswrapper[28120]: I0220 15:18:03.052535 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-55b884df7b-l5kgx" podStartSLOduration=2.052515944 podStartE2EDuration="2.052515944s" podCreationTimestamp="2026-02-20 15:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:03.04435544 +0000 UTC m=+1021.305149003" watchObservedRunningTime="2026-02-20 15:18:03.052515944 +0000 UTC m=+1021.313309507"
Feb 20 15:18:03.153815 master-0 kubenswrapper[28120]: I0220 15:18:03.153685 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-74d7ccf7c8-wrvs9" podStartSLOduration=5.153663865 podStartE2EDuration="5.153663865s" podCreationTimestamp="2026-02-20 15:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:03.148844795 +0000 UTC m=+1021.409638368" watchObservedRunningTime="2026-02-20 15:18:03.153663865 +0000 UTC m=+1021.414457448"
Feb 20 15:18:03.999026 master-0 kubenswrapper[28120]: I0220 15:18:03.998766 28120 generic.go:334] "Generic (PLEG): container finished" podID="06327dac-718e-426d-8c0e-9955f38106f7" containerID="c3b40cc2f0eaa97462bc1471e3321a04845b0242ad8521fb72928be5c7f9d613" exitCode=1
Feb 20 15:18:03.999026 master-0 kubenswrapper[28120]: I0220 15:18:03.998860 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-749b5b76fd-6zvxh" event={"ID":"06327dac-718e-426d-8c0e-9955f38106f7","Type":"ContainerDied","Data":"c3b40cc2f0eaa97462bc1471e3321a04845b0242ad8521fb72928be5c7f9d613"}
Feb 20 15:18:03.999026 master-0 kubenswrapper[28120]: I0220 15:18:03.998961 28120 scope.go:117] "RemoveContainer" containerID="c303aae86d84a777e683318362a5cc79dfe43b781716e2f0590669fbe2c5cc97"
Feb 20 15:18:04.000263 master-0 kubenswrapper[28120]: I0220 15:18:04.000192 28120 scope.go:117] "RemoveContainer" containerID="c3b40cc2f0eaa97462bc1471e3321a04845b0242ad8521fb72928be5c7f9d613"
Feb 20 15:18:04.002615 master-0 kubenswrapper[28120]: E0220 15:18:04.000820 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-749b5b76fd-6zvxh_openstack(06327dac-718e-426d-8c0e-9955f38106f7)\"" pod="openstack/ironic-749b5b76fd-6zvxh" podUID="06327dac-718e-426d-8c0e-9955f38106f7"
Feb 20 15:18:04.002615 master-0 kubenswrapper[28120]: I0220 15:18:04.001400 28120 generic.go:334] "Generic (PLEG): container finished" podID="af765e06-e2aa-4239-8a51-fc29e02fa257" containerID="bfb3865ba6c27d1fdd421a9b3a8c18dd45a62bf75f58f9036e7de2d761b2a745" exitCode=0
Feb 20 15:18:04.002615 master-0 kubenswrapper[28120]: I0220 15:18:04.001510 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"af765e06-e2aa-4239-8a51-fc29e02fa257","Type":"ContainerDied","Data":"bfb3865ba6c27d1fdd421a9b3a8c18dd45a62bf75f58f9036e7de2d761b2a745"}
Feb 20 15:18:04.016543 master-0 kubenswrapper[28120]: I0220 15:18:04.016474 28120 generic.go:334] "Generic (PLEG): container finished" podID="266c4a44-1f0a-468c-99c1-1dbdab46f6ad" containerID="c2cbe1b3f568c4ebfd131adc685988a17eb378409e09c5cf7a91f840eace4e41" exitCode=1
Feb 20 15:18:04.017544 master-0 kubenswrapper[28120]: I0220 15:18:04.017483 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" event={"ID":"266c4a44-1f0a-468c-99c1-1dbdab46f6ad","Type":"ContainerDied","Data":"c2cbe1b3f568c4ebfd131adc685988a17eb378409e09c5cf7a91f840eace4e41"}
Feb 20 15:18:04.019181 master-0 kubenswrapper[28120]: I0220 15:18:04.017788 28120 scope.go:117] "RemoveContainer" containerID="c2cbe1b3f568c4ebfd131adc685988a17eb378409e09c5cf7a91f840eace4e41"
Feb 20 15:18:04.019181 master-0 kubenswrapper[28120]: I0220 15:18:04.018145 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:04.019181 master-0 kubenswrapper[28120]: I0220 15:18:04.018183 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55b884df7b-l5kgx"
Feb 20 15:18:05.032264 master-0 kubenswrapper[28120]: I0220 15:18:05.032193 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" event={"ID":"266c4a44-1f0a-468c-99c1-1dbdab46f6ad","Type":"ContainerStarted","Data":"a828d7f5122b36fc07d917f428b69802ca53ad0ac272a2b2c48498586fe0a361"}
Feb 20 15:18:05.033452 master-0 kubenswrapper[28120]: I0220 15:18:05.033404 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc"
Feb 20 15:18:05.049878 master-0 kubenswrapper[28120]: I0220 15:18:05.049839 28120 scope.go:117] "RemoveContainer" containerID="c3b40cc2f0eaa97462bc1471e3321a04845b0242ad8521fb72928be5c7f9d613"
Feb 20 15:18:05.050378 master-0 kubenswrapper[28120]: E0220 15:18:05.050354 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-749b5b76fd-6zvxh_openstack(06327dac-718e-426d-8c0e-9955f38106f7)\"" pod="openstack/ironic-749b5b76fd-6zvxh" podUID="06327dac-718e-426d-8c0e-9955f38106f7"
Feb 20 15:18:05.626206 master-0 kubenswrapper[28120]: I0220 15:18:05.626140 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:18:05.653829 master-0 kubenswrapper[28120]: I0220 15:18:05.653726 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:18:05.657415 master-0 kubenswrapper[28120]: I0220 15:18:05.655070 28120 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-749b5b76fd-6zvxh"
Feb 20 15:18:05.741641 master-0 kubenswrapper[28120]: I0220 15:18:05.741376 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d687b68b9-d5bw7"]
Feb 20 15:18:05.741890 master-0 kubenswrapper[28120]: I0220 15:18:05.741675 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" podUID="788fe9e6-7d26-4739-855e-0a1f758e0376" containerName="dnsmasq-dns" containerID="cri-o://8e083b7dce91c3ee1f1f7d37019ca73b66f7e59d1305c72e283d8238718f7459" gracePeriod=10
Feb 20 15:18:06.086295 master-0 kubenswrapper[28120]: I0220 15:18:06.078013 28120 generic.go:334] "Generic (PLEG): container finished" podID="788fe9e6-7d26-4739-855e-0a1f758e0376" containerID="8e083b7dce91c3ee1f1f7d37019ca73b66f7e59d1305c72e283d8238718f7459" exitCode=0
Feb 20 15:18:06.086295 master-0 kubenswrapper[28120]: I0220 15:18:06.084824 28120 scope.go:117] "RemoveContainer" containerID="c3b40cc2f0eaa97462bc1471e3321a04845b0242ad8521fb72928be5c7f9d613"
Feb 20 15:18:06.086295 master-0 kubenswrapper[28120]: E0220 15:18:06.085106 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-749b5b76fd-6zvxh_openstack(06327dac-718e-426d-8c0e-9955f38106f7)\"" pod="openstack/ironic-749b5b76fd-6zvxh" podUID="06327dac-718e-426d-8c0e-9955f38106f7"
Feb 20 15:18:06.114806 master-0 kubenswrapper[28120]: I0220 15:18:06.114755 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" event={"ID":"788fe9e6-7d26-4739-855e-0a1f758e0376","Type":"ContainerDied","Data":"8e083b7dce91c3ee1f1f7d37019ca73b66f7e59d1305c72e283d8238718f7459"}
Feb 20 15:18:06.254233 master-0 kubenswrapper[28120]: I0220 15:18:06.254153 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-eea69-volume-lvm-iscsi-0"
Feb 20 15:18:06.494815 master-0 kubenswrapper[28120]: I0220 15:18:06.494707 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7"
Feb 20 15:18:06.553453 master-0 kubenswrapper[28120]: I0220 15:18:06.545080 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-ovsdbserver-sb\") pod \"788fe9e6-7d26-4739-855e-0a1f758e0376\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") "
Feb 20 15:18:06.553453 master-0 kubenswrapper[28120]: I0220 15:18:06.546112 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rptkl\" (UniqueName: \"kubernetes.io/projected/788fe9e6-7d26-4739-855e-0a1f758e0376-kube-api-access-rptkl\") pod \"788fe9e6-7d26-4739-855e-0a1f758e0376\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") "
Feb 20 15:18:06.553453 master-0 kubenswrapper[28120]: I0220 15:18:06.546193 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-dns-swift-storage-0\") pod \"788fe9e6-7d26-4739-855e-0a1f758e0376\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") "
Feb 20 15:18:06.553453 master-0 kubenswrapper[28120]: I0220 15:18:06.548597 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-dns-svc\") pod \"788fe9e6-7d26-4739-855e-0a1f758e0376\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") "
Feb 20 15:18:06.571426 master-0 kubenswrapper[28120]: I0220 15:18:06.554148 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/788fe9e6-7d26-4739-855e-0a1f758e0376-kube-api-access-rptkl" (OuterVolumeSpecName: "kube-api-access-rptkl") pod "788fe9e6-7d26-4739-855e-0a1f758e0376" (UID: "788fe9e6-7d26-4739-855e-0a1f758e0376"). InnerVolumeSpecName "kube-api-access-rptkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:18:06.605965 master-0 kubenswrapper[28120]: I0220 15:18:06.605622 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "788fe9e6-7d26-4739-855e-0a1f758e0376" (UID: "788fe9e6-7d26-4739-855e-0a1f758e0376"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:18:06.620121 master-0 kubenswrapper[28120]: I0220 15:18:06.619667 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "788fe9e6-7d26-4739-855e-0a1f758e0376" (UID: "788fe9e6-7d26-4739-855e-0a1f758e0376"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:18:06.620121 master-0 kubenswrapper[28120]: I0220 15:18:06.619985 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "788fe9e6-7d26-4739-855e-0a1f758e0376" (UID: "788fe9e6-7d26-4739-855e-0a1f758e0376"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:18:06.653337 master-0 kubenswrapper[28120]: I0220 15:18:06.633814 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-eea69-scheduler-0"
Feb 20 15:18:06.653337 master-0 kubenswrapper[28120]: I0220 15:18:06.653307 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-eea69-backup-0"
Feb 20 15:18:06.654972 master-0 kubenswrapper[28120]: I0220 15:18:06.654544 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-config\") pod \"788fe9e6-7d26-4739-855e-0a1f758e0376\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") "
Feb 20 15:18:06.654972 master-0 kubenswrapper[28120]: I0220 15:18:06.654667 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-ovsdbserver-nb\") pod \"788fe9e6-7d26-4739-855e-0a1f758e0376\" (UID: \"788fe9e6-7d26-4739-855e-0a1f758e0376\") "
Feb 20 15:18:06.670786 master-0 kubenswrapper[28120]: I0220 15:18:06.668381 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rptkl\" (UniqueName: \"kubernetes.io/projected/788fe9e6-7d26-4739-855e-0a1f758e0376-kube-api-access-rptkl\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:06.670786 master-0 kubenswrapper[28120]: I0220 15:18:06.668425 28120 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:06.670786 master-0 kubenswrapper[28120]: I0220 15:18:06.668439 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:06.670786 master-0 kubenswrapper[28120]: I0220 15:18:06.668450 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:06.734589 master-0 kubenswrapper[28120]: I0220 15:18:06.734300 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "788fe9e6-7d26-4739-855e-0a1f758e0376" (UID: "788fe9e6-7d26-4739-855e-0a1f758e0376"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:18:06.763624 master-0 kubenswrapper[28120]: I0220 15:18:06.763562 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-config" (OuterVolumeSpecName: "config") pod "788fe9e6-7d26-4739-855e-0a1f758e0376" (UID: "788fe9e6-7d26-4739-855e-0a1f758e0376"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:18:06.771862 master-0 kubenswrapper[28120]: I0220 15:18:06.771818 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-config\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:06.771862 master-0 kubenswrapper[28120]: I0220 15:18:06.771863 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/788fe9e6-7d26-4739-855e-0a1f758e0376-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:07.096076 master-0 kubenswrapper[28120]: I0220 15:18:07.095210 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7" event={"ID":"788fe9e6-7d26-4739-855e-0a1f758e0376","Type":"ContainerDied","Data":"d522be06ac328822d0118bced3005f41d9fbbb82b454280943ca80d362998c6b"}
Feb 20 15:18:07.096076 master-0 kubenswrapper[28120]: I0220 15:18:07.095264 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d687b68b9-d5bw7"
Feb 20 15:18:07.096076 master-0 kubenswrapper[28120]: I0220 15:18:07.095292 28120 scope.go:117] "RemoveContainer" containerID="8e083b7dce91c3ee1f1f7d37019ca73b66f7e59d1305c72e283d8238718f7459"
Feb 20 15:18:07.096076 master-0 kubenswrapper[28120]: I0220 15:18:07.096061 28120 scope.go:117] "RemoveContainer" containerID="c3b40cc2f0eaa97462bc1471e3321a04845b0242ad8521fb72928be5c7f9d613"
Feb 20 15:18:07.096849 master-0 kubenswrapper[28120]: E0220 15:18:07.096302 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-749b5b76fd-6zvxh_openstack(06327dac-718e-426d-8c0e-9955f38106f7)\"" pod="openstack/ironic-749b5b76fd-6zvxh" podUID="06327dac-718e-426d-8c0e-9955f38106f7"
Feb 20 15:18:07.206426 master-0 kubenswrapper[28120]: I0220 15:18:07.205011 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d687b68b9-d5bw7"]
Feb 20 15:18:07.216911 master-0 kubenswrapper[28120]: I0220 15:18:07.216844 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d687b68b9-d5bw7"]
Feb 20 15:18:08.087062 master-0 kubenswrapper[28120]: I0220 15:18:08.087006 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="788fe9e6-7d26-4739-855e-0a1f758e0376" path="/var/lib/kubelet/pods/788fe9e6-7d26-4739-855e-0a1f758e0376/volumes"
Feb 20 15:18:08.111481 master-0 kubenswrapper[28120]: I0220 15:18:08.111435 28120 generic.go:334] "Generic (PLEG): container finished" podID="266c4a44-1f0a-468c-99c1-1dbdab46f6ad" containerID="a828d7f5122b36fc07d917f428b69802ca53ad0ac272a2b2c48498586fe0a361" exitCode=1
Feb 20 15:18:08.111481 master-0 kubenswrapper[28120]: I0220 15:18:08.111481 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" event={"ID":"266c4a44-1f0a-468c-99c1-1dbdab46f6ad","Type":"ContainerDied","Data":"a828d7f5122b36fc07d917f428b69802ca53ad0ac272a2b2c48498586fe0a361"}
Feb 20 15:18:08.111966 master-0 kubenswrapper[28120]: I0220 15:18:08.111932 28120 scope.go:117] "RemoveContainer" containerID="a828d7f5122b36fc07d917f428b69802ca53ad0ac272a2b2c48498586fe0a361"
Feb 20 15:18:08.112409 master-0 kubenswrapper[28120]: E0220 15:18:08.112376 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-7dbf56d775-btzqc_openstack(266c4a44-1f0a-468c-99c1-1dbdab46f6ad)\"" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" podUID="266c4a44-1f0a-468c-99c1-1dbdab46f6ad"
Feb 20 15:18:08.621465 master-0 kubenswrapper[28120]: I0220 15:18:08.621411 28120 scope.go:117] "RemoveContainer" containerID="d253d7a4782181ee715b0a7baa5e77a2f9a2113fd77b7646e8676ddef1abf770"
Feb 20 15:18:08.657853 master-0 kubenswrapper[28120]: I0220 15:18:08.657810 28120 scope.go:117] "RemoveContainer" containerID="c2cbe1b3f568c4ebfd131adc685988a17eb378409e09c5cf7a91f840eace4e41"
Feb 20 15:18:09.046169 master-0 kubenswrapper[28120]: I0220 15:18:09.045866 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-856c7f5b5-b8rb7"
Feb 20 15:18:09.928119 master-0 kubenswrapper[28120]: I0220 15:18:09.927910 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-dgtgb"]
Feb 20 15:18:09.928745 master-0 kubenswrapper[28120]: E0220 15:18:09.928709 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788fe9e6-7d26-4739-855e-0a1f758e0376" containerName="dnsmasq-dns"
Feb 20 15:18:09.928841 master-0 kubenswrapper[28120]: I0220 15:18:09.928747 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="788fe9e6-7d26-4739-855e-0a1f758e0376" containerName="dnsmasq-dns"
Feb 20 15:18:09.928841 master-0 kubenswrapper[28120]: E0220 15:18:09.928771 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788fe9e6-7d26-4739-855e-0a1f758e0376" containerName="init"
Feb 20 15:18:09.928841 master-0 kubenswrapper[28120]: I0220 15:18:09.928784 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="788fe9e6-7d26-4739-855e-0a1f758e0376" containerName="init"
Feb 20 15:18:09.929402 master-0 kubenswrapper[28120]: I0220 15:18:09.929356 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="788fe9e6-7d26-4739-855e-0a1f758e0376" containerName="dnsmasq-dns"
Feb 20 15:18:09.930660 master-0 kubenswrapper[28120]: I0220 15:18:09.930613 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:09.933221 master-0 kubenswrapper[28120]: I0220 15:18:09.933170 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Feb 20 15:18:09.933577 master-0 kubenswrapper[28120]: I0220 15:18:09.933545 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Feb 20 15:18:09.981794 master-0 kubenswrapper[28120]: I0220 15:18:09.981737 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1da0bc4d-0f79-436b-a44c-927da967db5c-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:09.982040 master-0 kubenswrapper[28120]: I0220 15:18:09.981842 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-scripts\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:09.982040 master-0 kubenswrapper[28120]: I0220 15:18:09.981937 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1da0bc4d-0f79-436b-a44c-927da967db5c-var-lib-ironic\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:09.982040 master-0 kubenswrapper[28120]: I0220 15:18:09.982007 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-config\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:09.982040 master-0 kubenswrapper[28120]: I0220 15:18:09.982040 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbsmx\" (UniqueName: \"kubernetes.io/projected/1da0bc4d-0f79-436b-a44c-927da967db5c-kube-api-access-gbsmx\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:09.982188 master-0 kubenswrapper[28120]: I0220 15:18:09.982062 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1da0bc4d-0f79-436b-a44c-927da967db5c-etc-podinfo\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:09.982188 master-0 kubenswrapper[28120]: I0220 15:18:09.982123 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-combined-ca-bundle\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:09.984974 master-0 kubenswrapper[28120]: I0220 15:18:09.984722 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-dgtgb"]
Feb 20 15:18:10.083579 master-0 kubenswrapper[28120]: I0220 15:18:10.083524 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-scripts\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:10.083774 master-0 kubenswrapper[28120]: I0220 15:18:10.083634 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1da0bc4d-0f79-436b-a44c-927da967db5c-var-lib-ironic\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:10.084632 master-0 kubenswrapper[28120]: I0220 15:18:10.084585 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1da0bc4d-0f79-436b-a44c-927da967db5c-var-lib-ironic\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:10.085087 master-0 kubenswrapper[28120]: I0220 15:18:10.085059 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-config\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:10.085181 master-0 kubenswrapper[28120]: I0220 15:18:10.085159 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbsmx\" (UniqueName: \"kubernetes.io/projected/1da0bc4d-0f79-436b-a44c-927da967db5c-kube-api-access-gbsmx\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:10.085539 master-0 kubenswrapper[28120]: I0220 15:18:10.085194 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1da0bc4d-0f79-436b-a44c-927da967db5c-etc-podinfo\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:10.086093 master-0 kubenswrapper[28120]: I0220 15:18:10.086069 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-combined-ca-bundle\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:10.086190 master-0 kubenswrapper[28120]: I0220 15:18:10.086155 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1da0bc4d-0f79-436b-a44c-927da967db5c-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb"
Feb 20 15:18:10.086742 master-0 kubenswrapper[28120]: I0220 15:18:10.086685 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1da0bc4d-0f79-436b-a44c-927da967db5c-var-lib-ironic-inspector-dhcp-hostsdir\") pod 
\"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb" Feb 20 15:18:10.090361 master-0 kubenswrapper[28120]: I0220 15:18:10.088275 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-scripts\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb" Feb 20 15:18:10.090361 master-0 kubenswrapper[28120]: I0220 15:18:10.089806 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-config\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb" Feb 20 15:18:10.092098 master-0 kubenswrapper[28120]: I0220 15:18:10.091845 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1da0bc4d-0f79-436b-a44c-927da967db5c-etc-podinfo\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb" Feb 20 15:18:10.111982 master-0 kubenswrapper[28120]: I0220 15:18:10.106893 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbsmx\" (UniqueName: \"kubernetes.io/projected/1da0bc4d-0f79-436b-a44c-927da967db5c-kube-api-access-gbsmx\") pod \"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb" Feb 20 15:18:10.120554 master-0 kubenswrapper[28120]: I0220 15:18:10.120511 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-combined-ca-bundle\") pod 
\"ironic-inspector-db-sync-dgtgb\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " pod="openstack/ironic-inspector-db-sync-dgtgb" Feb 20 15:18:10.219558 master-0 kubenswrapper[28120]: I0220 15:18:10.219512 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-74d7ccf7c8-wrvs9" Feb 20 15:18:10.263620 master-0 kubenswrapper[28120]: I0220 15:18:10.263569 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-dgtgb" Feb 20 15:18:10.439448 master-0 kubenswrapper[28120]: I0220 15:18:10.439306 28120 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:18:10.440324 master-0 kubenswrapper[28120]: I0220 15:18:10.440296 28120 scope.go:117] "RemoveContainer" containerID="a828d7f5122b36fc07d917f428b69802ca53ad0ac272a2b2c48498586fe0a361" Feb 20 15:18:10.440658 master-0 kubenswrapper[28120]: E0220 15:18:10.440615 28120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-7dbf56d775-btzqc_openstack(266c4a44-1f0a-468c-99c1-1dbdab46f6ad)\"" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" podUID="266c4a44-1f0a-468c-99c1-1dbdab46f6ad" Feb 20 15:18:10.852600 master-0 kubenswrapper[28120]: I0220 15:18:10.848003 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-dgtgb"] Feb 20 15:18:10.859642 master-0 kubenswrapper[28120]: W0220 15:18:10.859226 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1da0bc4d_0f79_436b_a44c_927da967db5c.slice/crio-63b55b94c3b98b34b718c9a519d0e63a390cda2562ed7cb947d1eb9d524d7247 WatchSource:0}: Error finding container 
63b55b94c3b98b34b718c9a519d0e63a390cda2562ed7cb947d1eb9d524d7247: Status 404 returned error can't find the container with id 63b55b94c3b98b34b718c9a519d0e63a390cda2562ed7cb947d1eb9d524d7247 Feb 20 15:18:10.874646 master-0 kubenswrapper[28120]: I0220 15:18:10.871472 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-749b5b76fd-6zvxh"] Feb 20 15:18:10.874646 master-0 kubenswrapper[28120]: I0220 15:18:10.871788 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-749b5b76fd-6zvxh" podUID="06327dac-718e-426d-8c0e-9955f38106f7" containerName="ironic-api-log" containerID="cri-o://4ccf190aa86d2c5bade22e9790a8ba37225937c227a9e2a74ecec25fdceb4969" gracePeriod=60 Feb 20 15:18:11.203989 master-0 kubenswrapper[28120]: I0220 15:18:11.202181 28120 generic.go:334] "Generic (PLEG): container finished" podID="06327dac-718e-426d-8c0e-9955f38106f7" containerID="4ccf190aa86d2c5bade22e9790a8ba37225937c227a9e2a74ecec25fdceb4969" exitCode=143 Feb 20 15:18:11.203989 master-0 kubenswrapper[28120]: I0220 15:18:11.202250 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-749b5b76fd-6zvxh" event={"ID":"06327dac-718e-426d-8c0e-9955f38106f7","Type":"ContainerDied","Data":"4ccf190aa86d2c5bade22e9790a8ba37225937c227a9e2a74ecec25fdceb4969"} Feb 20 15:18:11.207066 master-0 kubenswrapper[28120]: I0220 15:18:11.206999 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-dgtgb" event={"ID":"1da0bc4d-0f79-436b-a44c-927da967db5c","Type":"ContainerStarted","Data":"63b55b94c3b98b34b718c9a519d0e63a390cda2562ed7cb947d1eb9d524d7247"} Feb 20 15:18:11.398361 master-0 kubenswrapper[28120]: I0220 15:18:11.398299 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-749b5b76fd-6zvxh" Feb 20 15:18:11.556219 master-0 kubenswrapper[28120]: I0220 15:18:11.556138 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-scripts\") pod \"06327dac-718e-426d-8c0e-9955f38106f7\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " Feb 20 15:18:11.556550 master-0 kubenswrapper[28120]: I0220 15:18:11.556269 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/06327dac-718e-426d-8c0e-9955f38106f7-etc-podinfo\") pod \"06327dac-718e-426d-8c0e-9955f38106f7\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " Feb 20 15:18:11.556550 master-0 kubenswrapper[28120]: I0220 15:18:11.556314 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-combined-ca-bundle\") pod \"06327dac-718e-426d-8c0e-9955f38106f7\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " Feb 20 15:18:11.556550 master-0 kubenswrapper[28120]: I0220 15:18:11.556348 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c69pl\" (UniqueName: \"kubernetes.io/projected/06327dac-718e-426d-8c0e-9955f38106f7-kube-api-access-c69pl\") pod \"06327dac-718e-426d-8c0e-9955f38106f7\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " Feb 20 15:18:11.556550 master-0 kubenswrapper[28120]: I0220 15:18:11.556389 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-config-data\") pod \"06327dac-718e-426d-8c0e-9955f38106f7\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " Feb 20 15:18:11.556550 master-0 kubenswrapper[28120]: I0220 15:18:11.556458 28120 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/06327dac-718e-426d-8c0e-9955f38106f7-config-data-merged\") pod \"06327dac-718e-426d-8c0e-9955f38106f7\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " Feb 20 15:18:11.556740 master-0 kubenswrapper[28120]: I0220 15:18:11.556625 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06327dac-718e-426d-8c0e-9955f38106f7-logs\") pod \"06327dac-718e-426d-8c0e-9955f38106f7\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " Feb 20 15:18:11.556876 master-0 kubenswrapper[28120]: I0220 15:18:11.556827 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-config-data-custom\") pod \"06327dac-718e-426d-8c0e-9955f38106f7\" (UID: \"06327dac-718e-426d-8c0e-9955f38106f7\") " Feb 20 15:18:11.557213 master-0 kubenswrapper[28120]: I0220 15:18:11.557128 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06327dac-718e-426d-8c0e-9955f38106f7-logs" (OuterVolumeSpecName: "logs") pod "06327dac-718e-426d-8c0e-9955f38106f7" (UID: "06327dac-718e-426d-8c0e-9955f38106f7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:18:11.557289 master-0 kubenswrapper[28120]: I0220 15:18:11.557206 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06327dac-718e-426d-8c0e-9955f38106f7-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "06327dac-718e-426d-8c0e-9955f38106f7" (UID: "06327dac-718e-426d-8c0e-9955f38106f7"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:18:11.558219 master-0 kubenswrapper[28120]: I0220 15:18:11.558183 28120 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/06327dac-718e-426d-8c0e-9955f38106f7-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:11.558219 master-0 kubenswrapper[28120]: I0220 15:18:11.558218 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06327dac-718e-426d-8c0e-9955f38106f7-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:11.562236 master-0 kubenswrapper[28120]: I0220 15:18:11.561990 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "06327dac-718e-426d-8c0e-9955f38106f7" (UID: "06327dac-718e-426d-8c0e-9955f38106f7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:11.562236 master-0 kubenswrapper[28120]: I0220 15:18:11.562106 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06327dac-718e-426d-8c0e-9955f38106f7-kube-api-access-c69pl" (OuterVolumeSpecName: "kube-api-access-c69pl") pod "06327dac-718e-426d-8c0e-9955f38106f7" (UID: "06327dac-718e-426d-8c0e-9955f38106f7"). InnerVolumeSpecName "kube-api-access-c69pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:18:11.562236 master-0 kubenswrapper[28120]: I0220 15:18:11.562134 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-scripts" (OuterVolumeSpecName: "scripts") pod "06327dac-718e-426d-8c0e-9955f38106f7" (UID: "06327dac-718e-426d-8c0e-9955f38106f7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:11.563384 master-0 kubenswrapper[28120]: I0220 15:18:11.563345 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/06327dac-718e-426d-8c0e-9955f38106f7-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "06327dac-718e-426d-8c0e-9955f38106f7" (UID: "06327dac-718e-426d-8c0e-9955f38106f7"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 20 15:18:11.598395 master-0 kubenswrapper[28120]: I0220 15:18:11.598248 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-config-data" (OuterVolumeSpecName: "config-data") pod "06327dac-718e-426d-8c0e-9955f38106f7" (UID: "06327dac-718e-426d-8c0e-9955f38106f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:11.628073 master-0 kubenswrapper[28120]: I0220 15:18:11.626455 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06327dac-718e-426d-8c0e-9955f38106f7" (UID: "06327dac-718e-426d-8c0e-9955f38106f7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:11.660053 master-0 kubenswrapper[28120]: I0220 15:18:11.659995 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:11.660053 master-0 kubenswrapper[28120]: I0220 15:18:11.660045 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c69pl\" (UniqueName: \"kubernetes.io/projected/06327dac-718e-426d-8c0e-9955f38106f7-kube-api-access-c69pl\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:11.660224 master-0 kubenswrapper[28120]: I0220 15:18:11.660062 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:11.660224 master-0 kubenswrapper[28120]: I0220 15:18:11.660075 28120 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:11.660224 master-0 kubenswrapper[28120]: I0220 15:18:11.660086 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06327dac-718e-426d-8c0e-9955f38106f7-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:11.660224 master-0 kubenswrapper[28120]: I0220 15:18:11.660098 28120 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/06327dac-718e-426d-8c0e-9955f38106f7-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:12.226869 master-0 kubenswrapper[28120]: I0220 15:18:12.226802 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-749b5b76fd-6zvxh" 
event={"ID":"06327dac-718e-426d-8c0e-9955f38106f7","Type":"ContainerDied","Data":"315fdc6c690da58fb34bee6765a7504fad7d4e60ebec1f2523615db3695f95ab"} Feb 20 15:18:12.227459 master-0 kubenswrapper[28120]: I0220 15:18:12.226890 28120 scope.go:117] "RemoveContainer" containerID="c3b40cc2f0eaa97462bc1471e3321a04845b0242ad8521fb72928be5c7f9d613" Feb 20 15:18:12.227459 master-0 kubenswrapper[28120]: I0220 15:18:12.227051 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-749b5b76fd-6zvxh" Feb 20 15:18:12.270517 master-0 kubenswrapper[28120]: I0220 15:18:12.270380 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-749b5b76fd-6zvxh"] Feb 20 15:18:12.286725 master-0 kubenswrapper[28120]: I0220 15:18:12.286590 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-749b5b76fd-6zvxh"] Feb 20 15:18:12.944557 master-0 kubenswrapper[28120]: I0220 15:18:12.943914 28120 scope.go:117] "RemoveContainer" containerID="4ccf190aa86d2c5bade22e9790a8ba37225937c227a9e2a74ecec25fdceb4969" Feb 20 15:18:13.008847 master-0 kubenswrapper[28120]: I0220 15:18:13.008465 28120 scope.go:117] "RemoveContainer" containerID="cc35ce3e8d27ddd7ddf08a675cf2504bb16f1553b3925508f7b292174e29ca28" Feb 20 15:18:13.209915 master-0 kubenswrapper[28120]: I0220 15:18:13.209829 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 20 15:18:13.210574 master-0 kubenswrapper[28120]: E0220 15:18:13.210540 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06327dac-718e-426d-8c0e-9955f38106f7" containerName="ironic-api-log" Feb 20 15:18:13.210574 master-0 kubenswrapper[28120]: I0220 15:18:13.210568 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="06327dac-718e-426d-8c0e-9955f38106f7" containerName="ironic-api-log" Feb 20 15:18:13.210696 master-0 kubenswrapper[28120]: E0220 15:18:13.210606 28120 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="06327dac-718e-426d-8c0e-9955f38106f7" containerName="ironic-api" Feb 20 15:18:13.210696 master-0 kubenswrapper[28120]: I0220 15:18:13.210613 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="06327dac-718e-426d-8c0e-9955f38106f7" containerName="ironic-api" Feb 20 15:18:13.210696 master-0 kubenswrapper[28120]: E0220 15:18:13.210624 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06327dac-718e-426d-8c0e-9955f38106f7" containerName="init" Feb 20 15:18:13.210696 master-0 kubenswrapper[28120]: I0220 15:18:13.210630 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="06327dac-718e-426d-8c0e-9955f38106f7" containerName="init" Feb 20 15:18:13.210696 master-0 kubenswrapper[28120]: E0220 15:18:13.210661 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06327dac-718e-426d-8c0e-9955f38106f7" containerName="ironic-api" Feb 20 15:18:13.210696 master-0 kubenswrapper[28120]: I0220 15:18:13.210667 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="06327dac-718e-426d-8c0e-9955f38106f7" containerName="ironic-api" Feb 20 15:18:13.211116 master-0 kubenswrapper[28120]: I0220 15:18:13.210990 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="06327dac-718e-426d-8c0e-9955f38106f7" containerName="ironic-api-log" Feb 20 15:18:13.211116 master-0 kubenswrapper[28120]: I0220 15:18:13.211028 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="06327dac-718e-426d-8c0e-9955f38106f7" containerName="ironic-api" Feb 20 15:18:13.211116 master-0 kubenswrapper[28120]: I0220 15:18:13.211055 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="06327dac-718e-426d-8c0e-9955f38106f7" containerName="ironic-api" Feb 20 15:18:13.212007 master-0 kubenswrapper[28120]: I0220 15:18:13.211962 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 20 15:18:13.213940 master-0 kubenswrapper[28120]: I0220 15:18:13.213738 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 20 15:18:13.213940 master-0 kubenswrapper[28120]: I0220 15:18:13.213762 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 20 15:18:13.224671 master-0 kubenswrapper[28120]: I0220 15:18:13.224560 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 20 15:18:13.308242 master-0 kubenswrapper[28120]: I0220 15:18:13.308180 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj8cc\" (UniqueName: \"kubernetes.io/projected/5153b5a6-9bf4-4478-aacf-c45d230b71cd-kube-api-access-mj8cc\") pod \"openstackclient\" (UID: \"5153b5a6-9bf4-4478-aacf-c45d230b71cd\") " pod="openstack/openstackclient" Feb 20 15:18:13.308780 master-0 kubenswrapper[28120]: I0220 15:18:13.308344 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5153b5a6-9bf4-4478-aacf-c45d230b71cd-openstack-config\") pod \"openstackclient\" (UID: \"5153b5a6-9bf4-4478-aacf-c45d230b71cd\") " pod="openstack/openstackclient" Feb 20 15:18:13.308780 master-0 kubenswrapper[28120]: I0220 15:18:13.308423 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5153b5a6-9bf4-4478-aacf-c45d230b71cd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5153b5a6-9bf4-4478-aacf-c45d230b71cd\") " pod="openstack/openstackclient" Feb 20 15:18:13.308780 master-0 kubenswrapper[28120]: I0220 15:18:13.308537 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/5153b5a6-9bf4-4478-aacf-c45d230b71cd-openstack-config-secret\") pod \"openstackclient\" (UID: \"5153b5a6-9bf4-4478-aacf-c45d230b71cd\") " pod="openstack/openstackclient" Feb 20 15:18:13.410371 master-0 kubenswrapper[28120]: I0220 15:18:13.410312 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5153b5a6-9bf4-4478-aacf-c45d230b71cd-openstack-config-secret\") pod \"openstackclient\" (UID: \"5153b5a6-9bf4-4478-aacf-c45d230b71cd\") " pod="openstack/openstackclient" Feb 20 15:18:13.410799 master-0 kubenswrapper[28120]: I0220 15:18:13.410779 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj8cc\" (UniqueName: \"kubernetes.io/projected/5153b5a6-9bf4-4478-aacf-c45d230b71cd-kube-api-access-mj8cc\") pod \"openstackclient\" (UID: \"5153b5a6-9bf4-4478-aacf-c45d230b71cd\") " pod="openstack/openstackclient" Feb 20 15:18:13.411456 master-0 kubenswrapper[28120]: I0220 15:18:13.411436 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5153b5a6-9bf4-4478-aacf-c45d230b71cd-openstack-config\") pod \"openstackclient\" (UID: \"5153b5a6-9bf4-4478-aacf-c45d230b71cd\") " pod="openstack/openstackclient" Feb 20 15:18:13.412835 master-0 kubenswrapper[28120]: I0220 15:18:13.412720 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5153b5a6-9bf4-4478-aacf-c45d230b71cd-openstack-config\") pod \"openstackclient\" (UID: \"5153b5a6-9bf4-4478-aacf-c45d230b71cd\") " pod="openstack/openstackclient" Feb 20 15:18:13.413520 master-0 kubenswrapper[28120]: I0220 15:18:13.413458 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5153b5a6-9bf4-4478-aacf-c45d230b71cd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5153b5a6-9bf4-4478-aacf-c45d230b71cd\") " pod="openstack/openstackclient" Feb 20 15:18:13.416119 master-0 kubenswrapper[28120]: I0220 15:18:13.416096 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5153b5a6-9bf4-4478-aacf-c45d230b71cd-openstack-config-secret\") pod \"openstackclient\" (UID: \"5153b5a6-9bf4-4478-aacf-c45d230b71cd\") " pod="openstack/openstackclient" Feb 20 15:18:13.418585 master-0 kubenswrapper[28120]: I0220 15:18:13.418564 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5153b5a6-9bf4-4478-aacf-c45d230b71cd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5153b5a6-9bf4-4478-aacf-c45d230b71cd\") " pod="openstack/openstackclient" Feb 20 15:18:13.429699 master-0 kubenswrapper[28120]: I0220 15:18:13.429649 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj8cc\" (UniqueName: \"kubernetes.io/projected/5153b5a6-9bf4-4478-aacf-c45d230b71cd-kube-api-access-mj8cc\") pod \"openstackclient\" (UID: \"5153b5a6-9bf4-4478-aacf-c45d230b71cd\") " pod="openstack/openstackclient" Feb 20 15:18:13.563171 master-0 kubenswrapper[28120]: I0220 15:18:13.563114 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 20 15:18:14.073775 master-0 kubenswrapper[28120]: I0220 15:18:14.073684 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06327dac-718e-426d-8c0e-9955f38106f7" path="/var/lib/kubelet/pods/06327dac-718e-426d-8c0e-9955f38106f7/volumes" Feb 20 15:18:14.266216 master-0 kubenswrapper[28120]: I0220 15:18:14.266136 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-dgtgb" event={"ID":"1da0bc4d-0f79-436b-a44c-927da967db5c","Type":"ContainerStarted","Data":"32f90c673f9d6318dfc8b872ba7a78dc7d52ede80df507a5bdd102eab794cabe"} Feb 20 15:18:14.354537 master-0 kubenswrapper[28120]: I0220 15:18:14.354480 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 20 15:18:14.516311 master-0 kubenswrapper[28120]: I0220 15:18:14.516226 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-dgtgb" podStartSLOduration=3.420716392 podStartE2EDuration="5.516209879s" podCreationTimestamp="2026-02-20 15:18:09 +0000 UTC" firstStartedPulling="2026-02-20 15:18:10.899885948 +0000 UTC m=+1029.160679511" lastFinishedPulling="2026-02-20 15:18:12.995379435 +0000 UTC m=+1031.256172998" observedRunningTime="2026-02-20 15:18:14.50943828 +0000 UTC m=+1032.770231843" watchObservedRunningTime="2026-02-20 15:18:14.516209879 +0000 UTC m=+1032.777003442" Feb 20 15:18:15.286175 master-0 kubenswrapper[28120]: I0220 15:18:15.285939 28120 generic.go:334] "Generic (PLEG): container finished" podID="1da0bc4d-0f79-436b-a44c-927da967db5c" containerID="32f90c673f9d6318dfc8b872ba7a78dc7d52ede80df507a5bdd102eab794cabe" exitCode=0 Feb 20 15:18:15.286175 master-0 kubenswrapper[28120]: I0220 15:18:15.285992 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-dgtgb" 
event={"ID":"1da0bc4d-0f79-436b-a44c-927da967db5c","Type":"ContainerDied","Data":"32f90c673f9d6318dfc8b872ba7a78dc7d52ede80df507a5bdd102eab794cabe"} Feb 20 15:18:15.289310 master-0 kubenswrapper[28120]: I0220 15:18:15.289064 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5153b5a6-9bf4-4478-aacf-c45d230b71cd","Type":"ContainerStarted","Data":"198b5579e1d53fecdef7bdddd5157f246cfe62fd626fb457bc31c66b42b48eea"} Feb 20 15:18:16.821206 master-0 kubenswrapper[28120]: I0220 15:18:16.820251 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5dddf9444-sdxtw"] Feb 20 15:18:16.827176 master-0 kubenswrapper[28120]: I0220 15:18:16.822448 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:16.828504 master-0 kubenswrapper[28120]: I0220 15:18:16.828487 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 20 15:18:16.828779 master-0 kubenswrapper[28120]: I0220 15:18:16.828766 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 20 15:18:16.829150 master-0 kubenswrapper[28120]: I0220 15:18:16.829136 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 20 15:18:16.855428 master-0 kubenswrapper[28120]: I0220 15:18:16.855391 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5dddf9444-sdxtw"] Feb 20 15:18:16.887971 master-0 kubenswrapper[28120]: I0220 15:18:16.883274 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-dgtgb" Feb 20 15:18:16.922214 master-0 kubenswrapper[28120]: I0220 15:18:16.922157 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9503c22b-df51-43f9-9d0d-c2d088642167-config-data\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:16.922485 master-0 kubenswrapper[28120]: I0220 15:18:16.922461 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9503c22b-df51-43f9-9d0d-c2d088642167-combined-ca-bundle\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:16.922666 master-0 kubenswrapper[28120]: I0220 15:18:16.922621 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9503c22b-df51-43f9-9d0d-c2d088642167-run-httpd\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:16.922911 master-0 kubenswrapper[28120]: I0220 15:18:16.922871 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9503c22b-df51-43f9-9d0d-c2d088642167-public-tls-certs\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:16.923279 master-0 kubenswrapper[28120]: I0220 15:18:16.923259 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/9503c22b-df51-43f9-9d0d-c2d088642167-etc-swift\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:16.923485 master-0 kubenswrapper[28120]: I0220 15:18:16.923444 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9503c22b-df51-43f9-9d0d-c2d088642167-internal-tls-certs\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:16.923690 master-0 kubenswrapper[28120]: I0220 15:18:16.923670 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqr7l\" (UniqueName: \"kubernetes.io/projected/9503c22b-df51-43f9-9d0d-c2d088642167-kube-api-access-dqr7l\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:16.923854 master-0 kubenswrapper[28120]: I0220 15:18:16.923834 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9503c22b-df51-43f9-9d0d-c2d088642167-log-httpd\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.025402 master-0 kubenswrapper[28120]: I0220 15:18:17.025330 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-scripts\") pod \"1da0bc4d-0f79-436b-a44c-927da967db5c\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " Feb 20 15:18:17.025610 master-0 kubenswrapper[28120]: I0220 15:18:17.025413 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1da0bc4d-0f79-436b-a44c-927da967db5c-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"1da0bc4d-0f79-436b-a44c-927da967db5c\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " Feb 20 15:18:17.025716 master-0 kubenswrapper[28120]: I0220 15:18:17.025689 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbsmx\" (UniqueName: \"kubernetes.io/projected/1da0bc4d-0f79-436b-a44c-927da967db5c-kube-api-access-gbsmx\") pod \"1da0bc4d-0f79-436b-a44c-927da967db5c\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " Feb 20 15:18:17.025816 master-0 kubenswrapper[28120]: I0220 15:18:17.025789 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1da0bc4d-0f79-436b-a44c-927da967db5c-var-lib-ironic\") pod \"1da0bc4d-0f79-436b-a44c-927da967db5c\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " Feb 20 15:18:17.025872 master-0 kubenswrapper[28120]: I0220 15:18:17.025854 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-combined-ca-bundle\") pod \"1da0bc4d-0f79-436b-a44c-927da967db5c\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " Feb 20 15:18:17.025907 master-0 kubenswrapper[28120]: I0220 15:18:17.025893 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-config\") pod \"1da0bc4d-0f79-436b-a44c-927da967db5c\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " Feb 20 15:18:17.025985 master-0 kubenswrapper[28120]: I0220 15:18:17.025968 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1da0bc4d-0f79-436b-a44c-927da967db5c-etc-podinfo\") 
pod \"1da0bc4d-0f79-436b-a44c-927da967db5c\" (UID: \"1da0bc4d-0f79-436b-a44c-927da967db5c\") " Feb 20 15:18:17.026684 master-0 kubenswrapper[28120]: I0220 15:18:17.026578 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1da0bc4d-0f79-436b-a44c-927da967db5c-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "1da0bc4d-0f79-436b-a44c-927da967db5c" (UID: "1da0bc4d-0f79-436b-a44c-927da967db5c"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:18:17.026684 master-0 kubenswrapper[28120]: I0220 15:18:17.026611 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1da0bc4d-0f79-436b-a44c-927da967db5c-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "1da0bc4d-0f79-436b-a44c-927da967db5c" (UID: "1da0bc4d-0f79-436b-a44c-927da967db5c"). InnerVolumeSpecName "var-lib-ironic". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:18:17.027164 master-0 kubenswrapper[28120]: I0220 15:18:17.027134 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9503c22b-df51-43f9-9d0d-c2d088642167-etc-swift\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.027216 master-0 kubenswrapper[28120]: I0220 15:18:17.027187 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9503c22b-df51-43f9-9d0d-c2d088642167-internal-tls-certs\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.027327 master-0 kubenswrapper[28120]: I0220 15:18:17.027221 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqr7l\" (UniqueName: \"kubernetes.io/projected/9503c22b-df51-43f9-9d0d-c2d088642167-kube-api-access-dqr7l\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.027360 master-0 kubenswrapper[28120]: I0220 15:18:17.027323 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9503c22b-df51-43f9-9d0d-c2d088642167-log-httpd\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.027469 master-0 kubenswrapper[28120]: I0220 15:18:17.027440 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9503c22b-df51-43f9-9d0d-c2d088642167-config-data\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: 
\"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.027960 master-0 kubenswrapper[28120]: I0220 15:18:17.027935 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9503c22b-df51-43f9-9d0d-c2d088642167-combined-ca-bundle\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.028011 master-0 kubenswrapper[28120]: I0220 15:18:17.027970 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9503c22b-df51-43f9-9d0d-c2d088642167-run-httpd\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.028090 master-0 kubenswrapper[28120]: I0220 15:18:17.028067 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9503c22b-df51-43f9-9d0d-c2d088642167-public-tls-certs\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.028185 master-0 kubenswrapper[28120]: I0220 15:18:17.028162 28120 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1da0bc4d-0f79-436b-a44c-927da967db5c-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:17.028231 master-0 kubenswrapper[28120]: I0220 15:18:17.028186 28120 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1da0bc4d-0f79-436b-a44c-927da967db5c-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:17.031557 master-0 kubenswrapper[28120]: I0220 15:18:17.031101 28120 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da0bc4d-0f79-436b-a44c-927da967db5c-kube-api-access-gbsmx" (OuterVolumeSpecName: "kube-api-access-gbsmx") pod "1da0bc4d-0f79-436b-a44c-927da967db5c" (UID: "1da0bc4d-0f79-436b-a44c-927da967db5c"). InnerVolumeSpecName "kube-api-access-gbsmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:18:17.032488 master-0 kubenswrapper[28120]: I0220 15:18:17.032392 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1da0bc4d-0f79-436b-a44c-927da967db5c-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "1da0bc4d-0f79-436b-a44c-927da967db5c" (UID: "1da0bc4d-0f79-436b-a44c-927da967db5c"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 20 15:18:17.032556 master-0 kubenswrapper[28120]: I0220 15:18:17.032454 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9503c22b-df51-43f9-9d0d-c2d088642167-log-httpd\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.033423 master-0 kubenswrapper[28120]: I0220 15:18:17.033365 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-scripts" (OuterVolumeSpecName: "scripts") pod "1da0bc4d-0f79-436b-a44c-927da967db5c" (UID: "1da0bc4d-0f79-436b-a44c-927da967db5c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:17.034662 master-0 kubenswrapper[28120]: I0220 15:18:17.034622 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9503c22b-df51-43f9-9d0d-c2d088642167-config-data\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.035283 master-0 kubenswrapper[28120]: I0220 15:18:17.035199 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9503c22b-df51-43f9-9d0d-c2d088642167-run-httpd\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.036976 master-0 kubenswrapper[28120]: I0220 15:18:17.036918 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9503c22b-df51-43f9-9d0d-c2d088642167-internal-tls-certs\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.043481 master-0 kubenswrapper[28120]: I0220 15:18:17.043440 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9503c22b-df51-43f9-9d0d-c2d088642167-public-tls-certs\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.043553 master-0 kubenswrapper[28120]: I0220 15:18:17.043521 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9503c22b-df51-43f9-9d0d-c2d088642167-combined-ca-bundle\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " 
pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.044494 master-0 kubenswrapper[28120]: I0220 15:18:17.044448 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9503c22b-df51-43f9-9d0d-c2d088642167-etc-swift\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.050158 master-0 kubenswrapper[28120]: I0220 15:18:17.049956 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqr7l\" (UniqueName: \"kubernetes.io/projected/9503c22b-df51-43f9-9d0d-c2d088642167-kube-api-access-dqr7l\") pod \"swift-proxy-5dddf9444-sdxtw\" (UID: \"9503c22b-df51-43f9-9d0d-c2d088642167\") " pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.070108 master-0 kubenswrapper[28120]: I0220 15:18:17.070035 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1da0bc4d-0f79-436b-a44c-927da967db5c" (UID: "1da0bc4d-0f79-436b-a44c-927da967db5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:17.089335 master-0 kubenswrapper[28120]: I0220 15:18:17.089130 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-config" (OuterVolumeSpecName: "config") pod "1da0bc4d-0f79-436b-a44c-927da967db5c" (UID: "1da0bc4d-0f79-436b-a44c-927da967db5c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:17.131187 master-0 kubenswrapper[28120]: I0220 15:18:17.131132 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbsmx\" (UniqueName: \"kubernetes.io/projected/1da0bc4d-0f79-436b-a44c-927da967db5c-kube-api-access-gbsmx\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:17.131187 master-0 kubenswrapper[28120]: I0220 15:18:17.131170 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:17.131187 master-0 kubenswrapper[28120]: I0220 15:18:17.131180 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:17.131187 master-0 kubenswrapper[28120]: I0220 15:18:17.131192 28120 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1da0bc4d-0f79-436b-a44c-927da967db5c-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:17.131460 master-0 kubenswrapper[28120]: I0220 15:18:17.131203 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1da0bc4d-0f79-436b-a44c-927da967db5c-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:17.212830 master-0 kubenswrapper[28120]: I0220 15:18:17.212766 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:17.322095 master-0 kubenswrapper[28120]: I0220 15:18:17.320687 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-dgtgb" event={"ID":"1da0bc4d-0f79-436b-a44c-927da967db5c","Type":"ContainerDied","Data":"63b55b94c3b98b34b718c9a519d0e63a390cda2562ed7cb947d1eb9d524d7247"} Feb 20 15:18:17.322095 master-0 kubenswrapper[28120]: I0220 15:18:17.320749 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63b55b94c3b98b34b718c9a519d0e63a390cda2562ed7cb947d1eb9d524d7247" Feb 20 15:18:17.322095 master-0 kubenswrapper[28120]: I0220 15:18:17.320800 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-dgtgb" Feb 20 15:18:17.774005 master-0 kubenswrapper[28120]: W0220 15:18:17.773788 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9503c22b_df51_43f9_9d0d_c2d088642167.slice/crio-76b32c2e3b2a353fdafb5e08d95b95a22027abf2a226fb525acb40035f7259d9 WatchSource:0}: Error finding container 76b32c2e3b2a353fdafb5e08d95b95a22027abf2a226fb525acb40035f7259d9: Status 404 returned error can't find the container with id 76b32c2e3b2a353fdafb5e08d95b95a22027abf2a226fb525acb40035f7259d9 Feb 20 15:18:17.779712 master-0 kubenswrapper[28120]: I0220 15:18:17.779657 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:18:17.786806 master-0 kubenswrapper[28120]: I0220 15:18:17.786740 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5dddf9444-sdxtw"] Feb 20 15:18:18.372093 master-0 kubenswrapper[28120]: I0220 15:18:18.364835 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz"] Feb 20 15:18:18.372093 master-0 kubenswrapper[28120]: E0220 
15:18:18.365449 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da0bc4d-0f79-436b-a44c-927da967db5c" containerName="ironic-inspector-db-sync" Feb 20 15:18:18.372093 master-0 kubenswrapper[28120]: I0220 15:18:18.365471 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da0bc4d-0f79-436b-a44c-927da967db5c" containerName="ironic-inspector-db-sync" Feb 20 15:18:18.372093 master-0 kubenswrapper[28120]: I0220 15:18:18.365910 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da0bc4d-0f79-436b-a44c-927da967db5c" containerName="ironic-inspector-db-sync" Feb 20 15:18:18.372093 master-0 kubenswrapper[28120]: I0220 15:18:18.369663 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.384547 master-0 kubenswrapper[28120]: I0220 15:18:18.378632 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5dddf9444-sdxtw" event={"ID":"9503c22b-df51-43f9-9d0d-c2d088642167","Type":"ContainerStarted","Data":"76224a28592be9f2143b1051737b3da1370a5409ea6803e67c4257aebac534ba"} Feb 20 15:18:18.384547 master-0 kubenswrapper[28120]: I0220 15:18:18.378684 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5dddf9444-sdxtw" event={"ID":"9503c22b-df51-43f9-9d0d-c2d088642167","Type":"ContainerStarted","Data":"76b32c2e3b2a353fdafb5e08d95b95a22027abf2a226fb525acb40035f7259d9"} Feb 20 15:18:18.384547 master-0 kubenswrapper[28120]: I0220 15:18:18.383582 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz"] Feb 20 15:18:18.475013 master-0 kubenswrapper[28120]: I0220 15:18:18.474934 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 20 15:18:18.481615 master-0 kubenswrapper[28120]: I0220 15:18:18.479628 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 20 15:18:18.482864 master-0 kubenswrapper[28120]: I0220 15:18:18.482818 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 20 15:18:18.483058 master-0 kubenswrapper[28120]: I0220 15:18:18.483033 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 20 15:18:18.484179 master-0 kubenswrapper[28120]: I0220 15:18:18.484161 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 20 15:18:18.493007 master-0 kubenswrapper[28120]: I0220 15:18:18.492946 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-dns-swift-storage-0\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.495217 master-0 kubenswrapper[28120]: I0220 15:18:18.495182 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.495461 master-0 kubenswrapper[28120]: I0220 15:18:18.495310 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj74z\" (UniqueName: \"kubernetes.io/projected/39cd6f63-b18a-4428-8698-a8c176540d71-kube-api-access-lj74z\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.495461 master-0 kubenswrapper[28120]: I0220 
15:18:18.495398 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.495559 master-0 kubenswrapper[28120]: I0220 15:18:18.495548 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-config\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.495598 master-0 kubenswrapper[28120]: I0220 15:18:18.495572 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-dns-svc\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.529790 master-0 kubenswrapper[28120]: I0220 15:18:18.529659 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 20 15:18:18.598161 master-0 kubenswrapper[28120]: I0220 15:18:18.598063 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-config\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.598368 master-0 kubenswrapper[28120]: I0220 15:18:18.598230 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj74z\" (UniqueName: 
\"kubernetes.io/projected/39cd6f63-b18a-4428-8698-a8c176540d71-kube-api-access-lj74z\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.598411 master-0 kubenswrapper[28120]: I0220 15:18:18.598395 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.598522 master-0 kubenswrapper[28120]: I0220 15:18:18.598493 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/79055f50-7353-4dc9-90d0-dcd790817149-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.598563 master-0 kubenswrapper[28120]: I0220 15:18:18.598536 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-scripts\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.598711 master-0 kubenswrapper[28120]: I0220 15:18:18.598688 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-config\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.598757 master-0 kubenswrapper[28120]: I0220 15:18:18.598723 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-dns-svc\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.598862 master-0 kubenswrapper[28120]: I0220 15:18:18.598835 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/79055f50-7353-4dc9-90d0-dcd790817149-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.598914 master-0 kubenswrapper[28120]: I0220 15:18:18.598881 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.599046 master-0 kubenswrapper[28120]: I0220 15:18:18.599010 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-dns-swift-storage-0\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.599233 master-0 kubenswrapper[28120]: I0220 15:18:18.599187 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/79055f50-7353-4dc9-90d0-dcd790817149-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.599289 master-0 kubenswrapper[28120]: I0220 15:18:18.599264 28120 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6zgf\" (UniqueName: \"kubernetes.io/projected/79055f50-7353-4dc9-90d0-dcd790817149-kube-api-access-z6zgf\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.599454 master-0 kubenswrapper[28120]: I0220 15:18:18.599414 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.600007 master-0 kubenswrapper[28120]: I0220 15:18:18.599974 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-config\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.600456 master-0 kubenswrapper[28120]: I0220 15:18:18.600391 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-ovsdbserver-nb\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.600511 master-0 kubenswrapper[28120]: I0220 15:18:18.600481 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-ovsdbserver-sb\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.600798 master-0 kubenswrapper[28120]: I0220 15:18:18.600758 28120 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-dns-svc\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.601160 master-0 kubenswrapper[28120]: I0220 15:18:18.601108 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-dns-swift-storage-0\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.625079 master-0 kubenswrapper[28120]: I0220 15:18:18.624636 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj74z\" (UniqueName: \"kubernetes.io/projected/39cd6f63-b18a-4428-8698-a8c176540d71-kube-api-access-lj74z\") pod \"dnsmasq-dns-5f4c4c4d6c-sfxkz\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.701621 master-0 kubenswrapper[28120]: I0220 15:18:18.701550 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/79055f50-7353-4dc9-90d0-dcd790817149-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.701829 master-0 kubenswrapper[28120]: I0220 15:18:18.701630 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6zgf\" (UniqueName: \"kubernetes.io/projected/79055f50-7353-4dc9-90d0-dcd790817149-kube-api-access-z6zgf\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.701829 master-0 kubenswrapper[28120]: I0220 15:18:18.701723 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-config\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.701829 master-0 kubenswrapper[28120]: I0220 15:18:18.701798 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-scripts\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.701829 master-0 kubenswrapper[28120]: I0220 15:18:18.701821 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/79055f50-7353-4dc9-90d0-dcd790817149-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.702035 master-0 kubenswrapper[28120]: I0220 15:18:18.701966 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/79055f50-7353-4dc9-90d0-dcd790817149-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.702089 master-0 kubenswrapper[28120]: I0220 15:18:18.702030 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.702233 master-0 kubenswrapper[28120]: I0220 15:18:18.702186 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" 
(UniqueName: \"kubernetes.io/empty-dir/79055f50-7353-4dc9-90d0-dcd790817149-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.702675 master-0 kubenswrapper[28120]: I0220 15:18:18.702641 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/79055f50-7353-4dc9-90d0-dcd790817149-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.705231 master-0 kubenswrapper[28120]: I0220 15:18:18.705199 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/79055f50-7353-4dc9-90d0-dcd790817149-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.711495 master-0 kubenswrapper[28120]: I0220 15:18:18.710443 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-scripts\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.711495 master-0 kubenswrapper[28120]: I0220 15:18:18.710490 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.711495 master-0 kubenswrapper[28120]: I0220 15:18:18.711113 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-config\") pod 
\"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.717614 master-0 kubenswrapper[28120]: I0220 15:18:18.717586 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6zgf\" (UniqueName: \"kubernetes.io/projected/79055f50-7353-4dc9-90d0-dcd790817149-kube-api-access-z6zgf\") pod \"ironic-inspector-0\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:18.747736 master-0 kubenswrapper[28120]: I0220 15:18:18.747695 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:18.842358 master-0 kubenswrapper[28120]: I0220 15:18:18.840989 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 20 15:18:19.248488 master-0 kubenswrapper[28120]: I0220 15:18:19.248425 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz"] Feb 20 15:18:19.271038 master-0 kubenswrapper[28120]: W0220 15:18:19.266932 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39cd6f63_b18a_4428_8698_a8c176540d71.slice/crio-0bba76e5dc4e26e2afc1dbccb9084c35bbed8626f4c12638e83ec859dd805663 WatchSource:0}: Error finding container 0bba76e5dc4e26e2afc1dbccb9084c35bbed8626f4c12638e83ec859dd805663: Status 404 returned error can't find the container with id 0bba76e5dc4e26e2afc1dbccb9084c35bbed8626f4c12638e83ec859dd805663 Feb 20 15:18:19.420117 master-0 kubenswrapper[28120]: I0220 15:18:19.420066 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5dddf9444-sdxtw" event={"ID":"9503c22b-df51-43f9-9d0d-c2d088642167","Type":"ContainerStarted","Data":"971effaf652f4262a3d52ab019890146dce1868024980bf79cf004c79151f08b"} Feb 20 15:18:19.420831 master-0 kubenswrapper[28120]: 
I0220 15:18:19.420786 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:19.420831 master-0 kubenswrapper[28120]: I0220 15:18:19.420816 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:19.434958 master-0 kubenswrapper[28120]: I0220 15:18:19.434225 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" event={"ID":"39cd6f63-b18a-4428-8698-a8c176540d71","Type":"ContainerStarted","Data":"0bba76e5dc4e26e2afc1dbccb9084c35bbed8626f4c12638e83ec859dd805663"} Feb 20 15:18:19.465288 master-0 kubenswrapper[28120]: I0220 15:18:19.464191 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5dddf9444-sdxtw" podStartSLOduration=3.46417219 podStartE2EDuration="3.46417219s" podCreationTimestamp="2026-02-20 15:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:19.44732371 +0000 UTC m=+1037.708117273" watchObservedRunningTime="2026-02-20 15:18:19.46417219 +0000 UTC m=+1037.724965753" Feb 20 15:18:19.523714 master-0 kubenswrapper[28120]: I0220 15:18:19.523639 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 20 15:18:20.088724 master-0 kubenswrapper[28120]: I0220 15:18:20.088068 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-9fd7b4d69-vstx7" Feb 20 15:18:20.189476 master-0 kubenswrapper[28120]: I0220 15:18:20.189422 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-79855fb6c6-bhshx"] Feb 20 15:18:20.190452 master-0 kubenswrapper[28120]: I0220 15:18:20.190407 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-79855fb6c6-bhshx" 
podUID="619993c5-d801-4e79-bf70-d7a94e307239" containerName="neutron-api" containerID="cri-o://80d3e7c2fbedda1a0f1237fbcb29ccf7960d1509734b46a0827c9b43d4405715" gracePeriod=30 Feb 20 15:18:20.191564 master-0 kubenswrapper[28120]: I0220 15:18:20.191070 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-79855fb6c6-bhshx" podUID="619993c5-d801-4e79-bf70-d7a94e307239" containerName="neutron-httpd" containerID="cri-o://6454b6a7258cdfe184e33c928247e68c106e172685eb33be40aa80c26db4617e" gracePeriod=30 Feb 20 15:18:20.464327 master-0 kubenswrapper[28120]: I0220 15:18:20.464265 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"79055f50-7353-4dc9-90d0-dcd790817149","Type":"ContainerStarted","Data":"1c3a377282bdbd7bfd266916a3c34c3c76f02757c850835f2b9ce8ed4c399939"} Feb 20 15:18:20.472943 master-0 kubenswrapper[28120]: I0220 15:18:20.472871 28120 generic.go:334] "Generic (PLEG): container finished" podID="39cd6f63-b18a-4428-8698-a8c176540d71" containerID="5192ce2dd5ff017015b2aaf15c9d4936de077110e06b2518e2ab1c548aaec450" exitCode=0 Feb 20 15:18:20.473238 master-0 kubenswrapper[28120]: I0220 15:18:20.472991 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" event={"ID":"39cd6f63-b18a-4428-8698-a8c176540d71","Type":"ContainerDied","Data":"5192ce2dd5ff017015b2aaf15c9d4936de077110e06b2518e2ab1c548aaec450"} Feb 20 15:18:20.478271 master-0 kubenswrapper[28120]: I0220 15:18:20.478210 28120 generic.go:334] "Generic (PLEG): container finished" podID="619993c5-d801-4e79-bf70-d7a94e307239" containerID="6454b6a7258cdfe184e33c928247e68c106e172685eb33be40aa80c26db4617e" exitCode=0 Feb 20 15:18:20.479286 master-0 kubenswrapper[28120]: I0220 15:18:20.479223 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79855fb6c6-bhshx" 
event={"ID":"619993c5-d801-4e79-bf70-d7a94e307239","Type":"ContainerDied","Data":"6454b6a7258cdfe184e33c928247e68c106e172685eb33be40aa80c26db4617e"} Feb 20 15:18:21.273247 master-0 kubenswrapper[28120]: I0220 15:18:21.273179 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 20 15:18:23.521752 master-0 kubenswrapper[28120]: I0220 15:18:23.521691 28120 generic.go:334] "Generic (PLEG): container finished" podID="619993c5-d801-4e79-bf70-d7a94e307239" containerID="80d3e7c2fbedda1a0f1237fbcb29ccf7960d1509734b46a0827c9b43d4405715" exitCode=0 Feb 20 15:18:23.521752 master-0 kubenswrapper[28120]: I0220 15:18:23.521744 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79855fb6c6-bhshx" event={"ID":"619993c5-d801-4e79-bf70-d7a94e307239","Type":"ContainerDied","Data":"80d3e7c2fbedda1a0f1237fbcb29ccf7960d1509734b46a0827c9b43d4405715"} Feb 20 15:18:24.056947 master-0 kubenswrapper[28120]: I0220 15:18:24.056587 28120 scope.go:117] "RemoveContainer" containerID="a828d7f5122b36fc07d917f428b69802ca53ad0ac272a2b2c48498586fe0a361" Feb 20 15:18:27.222164 master-0 kubenswrapper[28120]: I0220 15:18:27.222030 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:27.223348 master-0 kubenswrapper[28120]: I0220 15:18:27.223208 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5dddf9444-sdxtw" Feb 20 15:18:29.014588 master-0 kubenswrapper[28120]: I0220 15:18:29.014528 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c0df7-default-internal-api-0"] Feb 20 15:18:29.015104 master-0 kubenswrapper[28120]: I0220 15:18:29.014792 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-c0df7-default-internal-api-0" podUID="e80723b0-174d-4db6-ba59-b4ab371ed0cc" containerName="glance-log" 
containerID="cri-o://2e7744b8fa231391eb964c207b409e3353315d2aaee5b3c27e0b40008962380c" gracePeriod=30 Feb 20 15:18:29.015104 master-0 kubenswrapper[28120]: I0220 15:18:29.014938 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-c0df7-default-internal-api-0" podUID="e80723b0-174d-4db6-ba59-b4ab371ed0cc" containerName="glance-httpd" containerID="cri-o://e7857b1b473b1248555a33b62f06ca613bf23e6616c09cfeca46f2b7a2aff173" gracePeriod=30 Feb 20 15:18:29.687869 master-0 kubenswrapper[28120]: I0220 15:18:29.687413 28120 generic.go:334] "Generic (PLEG): container finished" podID="e80723b0-174d-4db6-ba59-b4ab371ed0cc" containerID="2e7744b8fa231391eb964c207b409e3353315d2aaee5b3c27e0b40008962380c" exitCode=143 Feb 20 15:18:29.687869 master-0 kubenswrapper[28120]: I0220 15:18:29.687463 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-internal-api-0" event={"ID":"e80723b0-174d-4db6-ba59-b4ab371ed0cc","Type":"ContainerDied","Data":"2e7744b8fa231391eb964c207b409e3353315d2aaee5b3c27e0b40008962380c"} Feb 20 15:18:30.551520 master-0 kubenswrapper[28120]: I0220 15:18:30.551396 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:18:30.556187 master-0 kubenswrapper[28120]: I0220 15:18:30.556020 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-ovndb-tls-certs\") pod \"619993c5-d801-4e79-bf70-d7a94e307239\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " Feb 20 15:18:30.556187 master-0 kubenswrapper[28120]: I0220 15:18:30.556086 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj27p\" (UniqueName: \"kubernetes.io/projected/619993c5-d801-4e79-bf70-d7a94e307239-kube-api-access-sj27p\") pod \"619993c5-d801-4e79-bf70-d7a94e307239\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " Feb 20 15:18:30.556187 master-0 kubenswrapper[28120]: I0220 15:18:30.556115 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-httpd-config\") pod \"619993c5-d801-4e79-bf70-d7a94e307239\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " Feb 20 15:18:30.561448 master-0 kubenswrapper[28120]: I0220 15:18:30.559491 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "619993c5-d801-4e79-bf70-d7a94e307239" (UID: "619993c5-d801-4e79-bf70-d7a94e307239"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:30.562259 master-0 kubenswrapper[28120]: I0220 15:18:30.562202 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619993c5-d801-4e79-bf70-d7a94e307239-kube-api-access-sj27p" (OuterVolumeSpecName: "kube-api-access-sj27p") pod "619993c5-d801-4e79-bf70-d7a94e307239" (UID: "619993c5-d801-4e79-bf70-d7a94e307239"). 
InnerVolumeSpecName "kube-api-access-sj27p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:18:30.657863 master-0 kubenswrapper[28120]: I0220 15:18:30.657817 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-config\") pod \"619993c5-d801-4e79-bf70-d7a94e307239\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " Feb 20 15:18:30.658001 master-0 kubenswrapper[28120]: I0220 15:18:30.657944 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-combined-ca-bundle\") pod \"619993c5-d801-4e79-bf70-d7a94e307239\" (UID: \"619993c5-d801-4e79-bf70-d7a94e307239\") " Feb 20 15:18:30.658979 master-0 kubenswrapper[28120]: I0220 15:18:30.658950 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj27p\" (UniqueName: \"kubernetes.io/projected/619993c5-d801-4e79-bf70-d7a94e307239-kube-api-access-sj27p\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:30.658979 master-0 kubenswrapper[28120]: I0220 15:18:30.658978 28120 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-httpd-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:30.704614 master-0 kubenswrapper[28120]: I0220 15:18:30.704556 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79855fb6c6-bhshx" event={"ID":"619993c5-d801-4e79-bf70-d7a94e307239","Type":"ContainerDied","Data":"c2cb37a87f61003975dc046876921754ad67955cfd3a7110a8bbceb23e5b323d"} Feb 20 15:18:30.704614 master-0 kubenswrapper[28120]: I0220 15:18:30.704609 28120 scope.go:117] "RemoveContainer" containerID="6454b6a7258cdfe184e33c928247e68c106e172685eb33be40aa80c26db4617e" Feb 20 15:18:30.704761 master-0 kubenswrapper[28120]: I0220 15:18:30.704741 
28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79855fb6c6-bhshx" Feb 20 15:18:30.760935 master-0 kubenswrapper[28120]: I0220 15:18:30.757789 28120 scope.go:117] "RemoveContainer" containerID="80d3e7c2fbedda1a0f1237fbcb29ccf7960d1509734b46a0827c9b43d4405715" Feb 20 15:18:30.837471 master-0 kubenswrapper[28120]: I0220 15:18:30.837415 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "619993c5-d801-4e79-bf70-d7a94e307239" (UID: "619993c5-d801-4e79-bf70-d7a94e307239"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:30.863528 master-0 kubenswrapper[28120]: I0220 15:18:30.863480 28120 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:30.864633 master-0 kubenswrapper[28120]: I0220 15:18:30.864593 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-config" (OuterVolumeSpecName: "config") pod "619993c5-d801-4e79-bf70-d7a94e307239" (UID: "619993c5-d801-4e79-bf70-d7a94e307239"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:30.874682 master-0 kubenswrapper[28120]: I0220 15:18:30.874630 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "619993c5-d801-4e79-bf70-d7a94e307239" (UID: "619993c5-d801-4e79-bf70-d7a94e307239"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:30.965747 master-0 kubenswrapper[28120]: I0220 15:18:30.965681 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:30.965747 master-0 kubenswrapper[28120]: I0220 15:18:30.965738 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/619993c5-d801-4e79-bf70-d7a94e307239-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:31.049031 master-0 kubenswrapper[28120]: I0220 15:18:31.048971 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-79855fb6c6-bhshx"] Feb 20 15:18:31.060088 master-0 kubenswrapper[28120]: I0220 15:18:31.060040 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-79855fb6c6-bhshx"] Feb 20 15:18:31.725268 master-0 kubenswrapper[28120]: I0220 15:18:31.724940 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"af765e06-e2aa-4239-8a51-fc29e02fa257","Type":"ContainerStarted","Data":"8a22b9f3bb7694ca14fb685380ee620f7676e346df87de494fc10c608401c6f7"} Feb 20 15:18:31.747958 master-0 kubenswrapper[28120]: I0220 15:18:31.742585 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" event={"ID":"266c4a44-1f0a-468c-99c1-1dbdab46f6ad","Type":"ContainerStarted","Data":"e83417a178c526dbfaa66489626aad7db41206a053bb9318cab1cdcac7a2888b"} Feb 20 15:18:31.747958 master-0 kubenswrapper[28120]: I0220 15:18:31.743493 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc" Feb 20 15:18:31.747958 master-0 kubenswrapper[28120]: I0220 15:18:31.745538 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"5153b5a6-9bf4-4478-aacf-c45d230b71cd","Type":"ContainerStarted","Data":"f77c021973b9d4369ec5be8d570d20036f04a9787300c93fa964bfe56a8e26b8"} Feb 20 15:18:31.750044 master-0 kubenswrapper[28120]: I0220 15:18:31.748354 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" event={"ID":"39cd6f63-b18a-4428-8698-a8c176540d71","Type":"ContainerStarted","Data":"606d40b0bcd7d370972ca6259083c7b714160955b8cc2e4a99abcd56c7da64fc"} Feb 20 15:18:31.750044 master-0 kubenswrapper[28120]: I0220 15:18:31.748803 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:18:31.801888 master-0 kubenswrapper[28120]: I0220 15:18:31.801793 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.744597964 podStartE2EDuration="18.801768715s" podCreationTimestamp="2026-02-20 15:18:13 +0000 UTC" firstStartedPulling="2026-02-20 15:18:14.346741555 +0000 UTC m=+1032.607535118" lastFinishedPulling="2026-02-20 15:18:30.403912306 +0000 UTC m=+1048.664705869" observedRunningTime="2026-02-20 15:18:31.793188571 +0000 UTC m=+1050.053982124" watchObservedRunningTime="2026-02-20 15:18:31.801768715 +0000 UTC m=+1050.062562278" Feb 20 15:18:31.858190 master-0 kubenswrapper[28120]: I0220 15:18:31.858095 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" podStartSLOduration=13.858067628 podStartE2EDuration="13.858067628s" podCreationTimestamp="2026-02-20 15:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:31.856805427 +0000 UTC m=+1050.117599010" watchObservedRunningTime="2026-02-20 15:18:31.858067628 +0000 UTC m=+1050.118861201" Feb 20 15:18:32.070742 master-0 kubenswrapper[28120]: I0220 15:18:32.070679 28120 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="619993c5-d801-4e79-bf70-d7a94e307239" path="/var/lib/kubelet/pods/619993c5-d801-4e79-bf70-d7a94e307239/volumes" Feb 20 15:18:32.767641 master-0 kubenswrapper[28120]: I0220 15:18:32.765643 28120 generic.go:334] "Generic (PLEG): container finished" podID="e80723b0-174d-4db6-ba59-b4ab371ed0cc" containerID="e7857b1b473b1248555a33b62f06ca613bf23e6616c09cfeca46f2b7a2aff173" exitCode=0 Feb 20 15:18:32.767641 master-0 kubenswrapper[28120]: I0220 15:18:32.766668 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-internal-api-0" event={"ID":"e80723b0-174d-4db6-ba59-b4ab371ed0cc","Type":"ContainerDied","Data":"e7857b1b473b1248555a33b62f06ca613bf23e6616c09cfeca46f2b7a2aff173"} Feb 20 15:18:32.894064 master-0 kubenswrapper[28120]: I0220 15:18:32.894017 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55b884df7b-l5kgx" Feb 20 15:18:32.901856 master-0 kubenswrapper[28120]: I0220 15:18:32.901751 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c0df7-default-internal-api-0" Feb 20 15:18:32.987007 master-0 kubenswrapper[28120]: I0220 15:18:32.984059 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55b884df7b-l5kgx" Feb 20 15:18:33.046002 master-0 kubenswrapper[28120]: I0220 15:18:33.045813 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42fl5\" (UniqueName: \"kubernetes.io/projected/e80723b0-174d-4db6-ba59-b4ab371ed0cc-kube-api-access-42fl5\") pod \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " Feb 20 15:18:33.050401 master-0 kubenswrapper[28120]: I0220 15:18:33.046948 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5\") pod \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " Feb 20 15:18:33.050401 master-0 kubenswrapper[28120]: I0220 15:18:33.047636 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-combined-ca-bundle\") pod \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " Feb 20 15:18:33.050401 master-0 kubenswrapper[28120]: I0220 15:18:33.047673 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-config-data\") pod \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " Feb 20 15:18:33.050401 master-0 kubenswrapper[28120]: I0220 15:18:33.047804 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e80723b0-174d-4db6-ba59-b4ab371ed0cc-httpd-run\") 
pod \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " Feb 20 15:18:33.050401 master-0 kubenswrapper[28120]: I0220 15:18:33.047875 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e80723b0-174d-4db6-ba59-b4ab371ed0cc-logs\") pod \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " Feb 20 15:18:33.050401 master-0 kubenswrapper[28120]: I0220 15:18:33.047915 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-internal-tls-certs\") pod \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " Feb 20 15:18:33.050401 master-0 kubenswrapper[28120]: I0220 15:18:33.048670 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-scripts\") pod \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\" (UID: \"e80723b0-174d-4db6-ba59-b4ab371ed0cc\") " Feb 20 15:18:33.050746 master-0 kubenswrapper[28120]: I0220 15:18:33.050567 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e80723b0-174d-4db6-ba59-b4ab371ed0cc-kube-api-access-42fl5" (OuterVolumeSpecName: "kube-api-access-42fl5") pod "e80723b0-174d-4db6-ba59-b4ab371ed0cc" (UID: "e80723b0-174d-4db6-ba59-b4ab371ed0cc"). InnerVolumeSpecName "kube-api-access-42fl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:18:33.060129 master-0 kubenswrapper[28120]: I0220 15:18:33.051094 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e80723b0-174d-4db6-ba59-b4ab371ed0cc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e80723b0-174d-4db6-ba59-b4ab371ed0cc" (UID: "e80723b0-174d-4db6-ba59-b4ab371ed0cc"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:18:33.060129 master-0 kubenswrapper[28120]: I0220 15:18:33.051153 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42fl5\" (UniqueName: \"kubernetes.io/projected/e80723b0-174d-4db6-ba59-b4ab371ed0cc-kube-api-access-42fl5\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:33.060129 master-0 kubenswrapper[28120]: I0220 15:18:33.054614 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e80723b0-174d-4db6-ba59-b4ab371ed0cc-logs" (OuterVolumeSpecName: "logs") pod "e80723b0-174d-4db6-ba59-b4ab371ed0cc" (UID: "e80723b0-174d-4db6-ba59-b4ab371ed0cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:18:33.092943 master-0 kubenswrapper[28120]: I0220 15:18:33.083949 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5" (OuterVolumeSpecName: "glance") pod "e80723b0-174d-4db6-ba59-b4ab371ed0cc" (UID: "e80723b0-174d-4db6-ba59-b4ab371ed0cc"). InnerVolumeSpecName "pvc-68c93e35-40fa-4709-92c7-5387ee663a47". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 20 15:18:33.092943 master-0 kubenswrapper[28120]: I0220 15:18:33.090870 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7fb458c87b-4xkt8"] Feb 20 15:18:33.092943 master-0 kubenswrapper[28120]: I0220 15:18:33.091691 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7fb458c87b-4xkt8" podUID="b0d27db3-3e55-4041-aca6-83c2da3645bb" containerName="placement-log" containerID="cri-o://c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a" gracePeriod=30 Feb 20 15:18:33.092943 master-0 kubenswrapper[28120]: I0220 15:18:33.091916 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7fb458c87b-4xkt8" podUID="b0d27db3-3e55-4041-aca6-83c2da3645bb" containerName="placement-api" containerID="cri-o://649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456" gracePeriod=30 Feb 20 15:18:33.100942 master-0 kubenswrapper[28120]: I0220 15:18:33.094935 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-scripts" (OuterVolumeSpecName: "scripts") pod "e80723b0-174d-4db6-ba59-b4ab371ed0cc" (UID: "e80723b0-174d-4db6-ba59-b4ab371ed0cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:33.151824 master-0 kubenswrapper[28120]: I0220 15:18:33.151764 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e80723b0-174d-4db6-ba59-b4ab371ed0cc" (UID: "e80723b0-174d-4db6-ba59-b4ab371ed0cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:18:33.153152 master-0 kubenswrapper[28120]: I0220 15:18:33.153087 28120 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-68c93e35-40fa-4709-92c7-5387ee663a47\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5\") on node \"master-0\" "
Feb 20 15:18:33.153152 master-0 kubenswrapper[28120]: I0220 15:18:33.153126 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:33.153152 master-0 kubenswrapper[28120]: I0220 15:18:33.153139 28120 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e80723b0-174d-4db6-ba59-b4ab371ed0cc-httpd-run\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:33.153152 master-0 kubenswrapper[28120]: I0220 15:18:33.153147 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e80723b0-174d-4db6-ba59-b4ab371ed0cc-logs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:33.153152 master-0 kubenswrapper[28120]: I0220 15:18:33.153159 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:33.186417 master-0 kubenswrapper[28120]: I0220 15:18:33.180110 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e80723b0-174d-4db6-ba59-b4ab371ed0cc" (UID: "e80723b0-174d-4db6-ba59-b4ab371ed0cc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:18:33.205754 master-0 kubenswrapper[28120]: I0220 15:18:33.205686 28120 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 20 15:18:33.205959 master-0 kubenswrapper[28120]: I0220 15:18:33.205836 28120 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-68c93e35-40fa-4709-92c7-5387ee663a47" (UniqueName: "kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5") on node "master-0"
Feb 20 15:18:33.229120 master-0 kubenswrapper[28120]: I0220 15:18:33.229065 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-config-data" (OuterVolumeSpecName: "config-data") pod "e80723b0-174d-4db6-ba59-b4ab371ed0cc" (UID: "e80723b0-174d-4db6-ba59-b4ab371ed0cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:18:33.258390 master-0 kubenswrapper[28120]: I0220 15:18:33.258338 28120 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:33.258390 master-0 kubenswrapper[28120]: I0220 15:18:33.258384 28120 reconciler_common.go:293] "Volume detached for volume \"pvc-68c93e35-40fa-4709-92c7-5387ee663a47\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:33.258390 master-0 kubenswrapper[28120]: I0220 15:18:33.258396 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80723b0-174d-4db6-ba59-b4ab371ed0cc-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:33.832874 master-0 kubenswrapper[28120]: I0220 15:18:33.832665 28120 generic.go:334] "Generic (PLEG): container finished" podID="b0d27db3-3e55-4041-aca6-83c2da3645bb" containerID="c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a" exitCode=143
Feb 20 15:18:33.832874 master-0 kubenswrapper[28120]: I0220 15:18:33.832763 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb458c87b-4xkt8" event={"ID":"b0d27db3-3e55-4041-aca6-83c2da3645bb","Type":"ContainerDied","Data":"c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a"}
Feb 20 15:18:33.861140 master-0 kubenswrapper[28120]: I0220 15:18:33.861090 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:33.865583 master-0 kubenswrapper[28120]: I0220 15:18:33.865518 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-internal-api-0" event={"ID":"e80723b0-174d-4db6-ba59-b4ab371ed0cc","Type":"ContainerDied","Data":"73fa186047e93843df4493be40e57db3bacd3491b31e8a70a5c874dcffaf1cbd"}
Feb 20 15:18:33.865687 master-0 kubenswrapper[28120]: I0220 15:18:33.865602 28120 scope.go:117] "RemoveContainer" containerID="e7857b1b473b1248555a33b62f06ca613bf23e6616c09cfeca46f2b7a2aff173"
Feb 20 15:18:33.953198 master-0 kubenswrapper[28120]: I0220 15:18:33.952513 28120 scope.go:117] "RemoveContainer" containerID="2e7744b8fa231391eb964c207b409e3353315d2aaee5b3c27e0b40008962380c"
Feb 20 15:18:34.002300 master-0 kubenswrapper[28120]: I0220 15:18:34.002249 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c0df7-default-internal-api-0"]
Feb 20 15:18:34.052209 master-0 kubenswrapper[28120]: I0220 15:18:34.045038 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c0df7-default-internal-api-0"]
Feb 20 15:18:34.109795 master-0 kubenswrapper[28120]: I0220 15:18:34.108859 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e80723b0-174d-4db6-ba59-b4ab371ed0cc" path="/var/lib/kubelet/pods/e80723b0-174d-4db6-ba59-b4ab371ed0cc/volumes"
Feb 20 15:18:34.111954 master-0 kubenswrapper[28120]: I0220 15:18:34.111904 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c0df7-default-internal-api-0"]
Feb 20 15:18:34.112480 master-0 kubenswrapper[28120]: E0220 15:18:34.112450 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80723b0-174d-4db6-ba59-b4ab371ed0cc" containerName="glance-httpd"
Feb 20 15:18:34.112480 master-0 kubenswrapper[28120]: I0220 15:18:34.112470 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80723b0-174d-4db6-ba59-b4ab371ed0cc" containerName="glance-httpd"
Feb 20 15:18:34.112563 master-0 kubenswrapper[28120]: E0220 15:18:34.112504 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80723b0-174d-4db6-ba59-b4ab371ed0cc" containerName="glance-log"
Feb 20 15:18:34.112563 master-0 kubenswrapper[28120]: I0220 15:18:34.112511 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80723b0-174d-4db6-ba59-b4ab371ed0cc" containerName="glance-log"
Feb 20 15:18:34.112563 master-0 kubenswrapper[28120]: E0220 15:18:34.112526 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619993c5-d801-4e79-bf70-d7a94e307239" containerName="neutron-api"
Feb 20 15:18:34.112563 master-0 kubenswrapper[28120]: I0220 15:18:34.112533 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="619993c5-d801-4e79-bf70-d7a94e307239" containerName="neutron-api"
Feb 20 15:18:34.112563 master-0 kubenswrapper[28120]: E0220 15:18:34.112549 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619993c5-d801-4e79-bf70-d7a94e307239" containerName="neutron-httpd"
Feb 20 15:18:34.112563 master-0 kubenswrapper[28120]: I0220 15:18:34.112555 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="619993c5-d801-4e79-bf70-d7a94e307239" containerName="neutron-httpd"
Feb 20 15:18:34.112943 master-0 kubenswrapper[28120]: I0220 15:18:34.112796 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="619993c5-d801-4e79-bf70-d7a94e307239" containerName="neutron-api"
Feb 20 15:18:34.112943 master-0 kubenswrapper[28120]: I0220 15:18:34.112813 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80723b0-174d-4db6-ba59-b4ab371ed0cc" containerName="glance-log"
Feb 20 15:18:34.113036 master-0 kubenswrapper[28120]: I0220 15:18:34.112836 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="619993c5-d801-4e79-bf70-d7a94e307239" containerName="neutron-httpd"
Feb 20 15:18:34.113036 master-0 kubenswrapper[28120]: I0220 15:18:34.113001 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80723b0-174d-4db6-ba59-b4ab371ed0cc" containerName="glance-httpd"
Feb 20 15:18:34.114339 master-0 kubenswrapper[28120]: I0220 15:18:34.114172 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.116549 master-0 kubenswrapper[28120]: I0220 15:18:34.116510 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 20 15:18:34.116846 master-0 kubenswrapper[28120]: I0220 15:18:34.116817 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-c0df7-default-internal-config-data"
Feb 20 15:18:34.129091 master-0 kubenswrapper[28120]: I0220 15:18:34.129029 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c0df7-default-internal-api-0"]
Feb 20 15:18:34.315505 master-0 kubenswrapper[28120]: I0220 15:18:34.315439 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b461ef-72b8-4632-803c-a0416f9cfbd3-logs\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.315724 master-0 kubenswrapper[28120]: I0220 15:18:34.315545 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b461ef-72b8-4632-803c-a0416f9cfbd3-internal-tls-certs\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.315724 master-0 kubenswrapper[28120]: I0220 15:18:34.315602 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72b461ef-72b8-4632-803c-a0416f9cfbd3-httpd-run\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.315724 master-0 kubenswrapper[28120]: I0220 15:18:34.315624 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b461ef-72b8-4632-803c-a0416f9cfbd3-config-data\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.315724 master-0 kubenswrapper[28120]: I0220 15:18:34.315644 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-68c93e35-40fa-4709-92c7-5387ee663a47\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.315724 master-0 kubenswrapper[28120]: I0220 15:18:34.315669 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b461ef-72b8-4632-803c-a0416f9cfbd3-combined-ca-bundle\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.315724 master-0 kubenswrapper[28120]: I0220 15:18:34.315703 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvhvp\" (UniqueName: \"kubernetes.io/projected/72b461ef-72b8-4632-803c-a0416f9cfbd3-kube-api-access-hvhvp\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.315909 master-0 kubenswrapper[28120]: I0220 15:18:34.315780 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72b461ef-72b8-4632-803c-a0416f9cfbd3-scripts\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.417422 master-0 kubenswrapper[28120]: I0220 15:18:34.417302 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72b461ef-72b8-4632-803c-a0416f9cfbd3-scripts\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.417422 master-0 kubenswrapper[28120]: I0220 15:18:34.417380 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b461ef-72b8-4632-803c-a0416f9cfbd3-logs\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.420990 master-0 kubenswrapper[28120]: I0220 15:18:34.418177 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b461ef-72b8-4632-803c-a0416f9cfbd3-logs\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.420990 master-0 kubenswrapper[28120]: I0220 15:18:34.418309 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b461ef-72b8-4632-803c-a0416f9cfbd3-internal-tls-certs\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.420990 master-0 kubenswrapper[28120]: I0220 15:18:34.418387 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72b461ef-72b8-4632-803c-a0416f9cfbd3-httpd-run\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.420990 master-0 kubenswrapper[28120]: I0220 15:18:34.418409 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b461ef-72b8-4632-803c-a0416f9cfbd3-config-data\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.420990 master-0 kubenswrapper[28120]: I0220 15:18:34.420481 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72b461ef-72b8-4632-803c-a0416f9cfbd3-httpd-run\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.421218 master-0 kubenswrapper[28120]: I0220 15:18:34.421011 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-68c93e35-40fa-4709-92c7-5387ee663a47\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.421579 master-0 kubenswrapper[28120]: I0220 15:18:34.421053 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b461ef-72b8-4632-803c-a0416f9cfbd3-combined-ca-bundle\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.421765 master-0 kubenswrapper[28120]: I0220 15:18:34.421745 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvhvp\" (UniqueName: \"kubernetes.io/projected/72b461ef-72b8-4632-803c-a0416f9cfbd3-kube-api-access-hvhvp\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.422604 master-0 kubenswrapper[28120]: I0220 15:18:34.422556 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72b461ef-72b8-4632-803c-a0416f9cfbd3-scripts\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.424151 master-0 kubenswrapper[28120]: I0220 15:18:34.424121 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 20 15:18:34.424223 master-0 kubenswrapper[28120]: I0220 15:18:34.424172 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-68c93e35-40fa-4709-92c7-5387ee663a47\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/f9eb06ead75927ffb33ebe5802ebc626d2bdc6570ed175f8a5fed1a927e8b454/globalmount\"" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.424266 master-0 kubenswrapper[28120]: I0220 15:18:34.424212 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b461ef-72b8-4632-803c-a0416f9cfbd3-internal-tls-certs\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.425198 master-0 kubenswrapper[28120]: I0220 15:18:34.425167 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b461ef-72b8-4632-803c-a0416f9cfbd3-config-data\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.426607 master-0 kubenswrapper[28120]: I0220 15:18:34.426583 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b461ef-72b8-4632-803c-a0416f9cfbd3-combined-ca-bundle\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.437666 master-0 kubenswrapper[28120]: I0220 15:18:34.437627 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvhvp\" (UniqueName: \"kubernetes.io/projected/72b461ef-72b8-4632-803c-a0416f9cfbd3-kube-api-access-hvhvp\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:34.878992 master-0 kubenswrapper[28120]: I0220 15:18:34.871660 28120 generic.go:334] "Generic (PLEG): container finished" podID="af765e06-e2aa-4239-8a51-fc29e02fa257" containerID="8a22b9f3bb7694ca14fb685380ee620f7676e346df87de494fc10c608401c6f7" exitCode=0
Feb 20 15:18:34.878992 master-0 kubenswrapper[28120]: I0220 15:18:34.871723 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"af765e06-e2aa-4239-8a51-fc29e02fa257","Type":"ContainerDied","Data":"8a22b9f3bb7694ca14fb685380ee620f7676e346df87de494fc10c608401c6f7"}
Feb 20 15:18:35.281494 master-0 kubenswrapper[28120]: I0220 15:18:35.281413 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-68c93e35-40fa-4709-92c7-5387ee663a47\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e7d6ec2f-0159-4059-a4df-0b5b60e7e4f5\") pod \"glance-c0df7-default-internal-api-0\" (UID: \"72b461ef-72b8-4632-803c-a0416f9cfbd3\") " pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:35.381398 master-0 kubenswrapper[28120]: I0220 15:18:35.381323 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c0df7-default-external-api-0"]
Feb 20 15:18:35.381666 master-0 kubenswrapper[28120]: I0220 15:18:35.381621 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-c0df7-default-external-api-0" podUID="8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" containerName="glance-log" containerID="cri-o://75f52938914e3c2ceead05696ed96550dd740a1b3e1f9d04aca0d29cf16fd722" gracePeriod=30
Feb 20 15:18:35.381825 master-0 kubenswrapper[28120]: I0220 15:18:35.381751 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-c0df7-default-external-api-0" podUID="8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" containerName="glance-httpd" containerID="cri-o://7876bd0e3790df9dcf01e0fefd2e587d51e3b8bb2707a59f4fc0b26d9446461c" gracePeriod=30
Feb 20 15:18:35.398461 master-0 kubenswrapper[28120]: I0220 15:18:35.398406 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:35.488408 master-0 kubenswrapper[28120]: I0220 15:18:35.488358 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-7dbf56d775-btzqc"
Feb 20 15:18:35.884154 master-0 kubenswrapper[28120]: I0220 15:18:35.884084 28120 generic.go:334] "Generic (PLEG): container finished" podID="8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" containerID="75f52938914e3c2ceead05696ed96550dd740a1b3e1f9d04aca0d29cf16fd722" exitCode=143
Feb 20 15:18:35.884154 master-0 kubenswrapper[28120]: I0220 15:18:35.884135 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-external-api-0" event={"ID":"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b","Type":"ContainerDied","Data":"75f52938914e3c2ceead05696ed96550dd740a1b3e1f9d04aca0d29cf16fd722"}
Feb 20 15:18:35.973844 master-0 kubenswrapper[28120]: I0220 15:18:35.973784 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c0df7-default-internal-api-0"]
Feb 20 15:18:35.977108 master-0 kubenswrapper[28120]: W0220 15:18:35.977055 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72b461ef_72b8_4632_803c_a0416f9cfbd3.slice/crio-5f33e8c8709706ab3e55b13f761866c7b8354227af01642abcfebecbd2b1ce74 WatchSource:0}: Error finding container 5f33e8c8709706ab3e55b13f761866c7b8354227af01642abcfebecbd2b1ce74: Status 404 returned error can't find the container with id 5f33e8c8709706ab3e55b13f761866c7b8354227af01642abcfebecbd2b1ce74
Feb 20 15:18:36.826459 master-0 kubenswrapper[28120]: I0220 15:18:36.825993 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fb458c87b-4xkt8"
Feb 20 15:18:36.920971 master-0 kubenswrapper[28120]: I0220 15:18:36.920897 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-internal-api-0" event={"ID":"72b461ef-72b8-4632-803c-a0416f9cfbd3","Type":"ContainerStarted","Data":"7a1147e157741718c7a8ca49ae49f5bb62dbab04902e2da4397bf0e50281df72"}
Feb 20 15:18:36.920971 master-0 kubenswrapper[28120]: I0220 15:18:36.920968 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-internal-api-0" event={"ID":"72b461ef-72b8-4632-803c-a0416f9cfbd3","Type":"ContainerStarted","Data":"5f33e8c8709706ab3e55b13f761866c7b8354227af01642abcfebecbd2b1ce74"}
Feb 20 15:18:36.924887 master-0 kubenswrapper[28120]: I0220 15:18:36.924844 28120 generic.go:334] "Generic (PLEG): container finished" podID="b0d27db3-3e55-4041-aca6-83c2da3645bb" containerID="649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456" exitCode=0
Feb 20 15:18:36.925064 master-0 kubenswrapper[28120]: I0220 15:18:36.925030 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb458c87b-4xkt8" event={"ID":"b0d27db3-3e55-4041-aca6-83c2da3645bb","Type":"ContainerDied","Data":"649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456"}
Feb 20 15:18:36.925133 master-0 kubenswrapper[28120]: I0220 15:18:36.925070 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb458c87b-4xkt8" event={"ID":"b0d27db3-3e55-4041-aca6-83c2da3645bb","Type":"ContainerDied","Data":"f26c7456736ad037c9044e68bc655f286f78202921a216d2edf384eab37787a5"}
Feb 20 15:18:36.925133 master-0 kubenswrapper[28120]: I0220 15:18:36.925089 28120 scope.go:117] "RemoveContainer" containerID="649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456"
Feb 20 15:18:36.925311 master-0 kubenswrapper[28120]: I0220 15:18:36.925292 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fb458c87b-4xkt8"
Feb 20 15:18:36.960316 master-0 kubenswrapper[28120]: I0220 15:18:36.960280 28120 scope.go:117] "RemoveContainer" containerID="c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a"
Feb 20 15:18:36.989708 master-0 kubenswrapper[28120]: I0220 15:18:36.989557 28120 scope.go:117] "RemoveContainer" containerID="649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456"
Feb 20 15:18:36.990953 master-0 kubenswrapper[28120]: I0220 15:18:36.990912 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d27db3-3e55-4041-aca6-83c2da3645bb-logs\") pod \"b0d27db3-3e55-4041-aca6-83c2da3645bb\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") "
Feb 20 15:18:36.991103 master-0 kubenswrapper[28120]: I0220 15:18:36.991085 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-internal-tls-certs\") pod \"b0d27db3-3e55-4041-aca6-83c2da3645bb\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") "
Feb 20 15:18:36.991247 master-0 kubenswrapper[28120]: E0220 15:18:36.990939 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456\": container with ID starting with 649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456 not found: ID does not exist" containerID="649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456"
Feb 20 15:18:36.991439 master-0 kubenswrapper[28120]: I0220 15:18:36.991273 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456"} err="failed to get container status \"649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456\": rpc error: code = NotFound desc = could not find container \"649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456\": container with ID starting with 649c0f3ec84afd9a73d4e8c5a8c0c02c1f3b46c8574299bfc9e2473557ea8456 not found: ID does not exist"
Feb 20 15:18:36.991439 master-0 kubenswrapper[28120]: I0220 15:18:36.991293 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d27db3-3e55-4041-aca6-83c2da3645bb-logs" (OuterVolumeSpecName: "logs") pod "b0d27db3-3e55-4041-aca6-83c2da3645bb" (UID: "b0d27db3-3e55-4041-aca6-83c2da3645bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:18:36.991439 master-0 kubenswrapper[28120]: I0220 15:18:36.991305 28120 scope.go:117] "RemoveContainer" containerID="c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a"
Feb 20 15:18:36.991599 master-0 kubenswrapper[28120]: I0220 15:18:36.991584 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-public-tls-certs\") pod \"b0d27db3-3e55-4041-aca6-83c2da3645bb\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") "
Feb 20 15:18:36.991723 master-0 kubenswrapper[28120]: I0220 15:18:36.991711 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-scripts\") pod \"b0d27db3-3e55-4041-aca6-83c2da3645bb\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") "
Feb 20 15:18:36.991894 master-0 kubenswrapper[28120]: I0220 15:18:36.991882 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-config-data\") pod \"b0d27db3-3e55-4041-aca6-83c2da3645bb\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") "
Feb 20 15:18:36.991989 master-0 kubenswrapper[28120]: I0220 15:18:36.991977 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-combined-ca-bundle\") pod \"b0d27db3-3e55-4041-aca6-83c2da3645bb\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") "
Feb 20 15:18:36.992384 master-0 kubenswrapper[28120]: I0220 15:18:36.992368 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhhmj\" (UniqueName: \"kubernetes.io/projected/b0d27db3-3e55-4041-aca6-83c2da3645bb-kube-api-access-fhhmj\") pod \"b0d27db3-3e55-4041-aca6-83c2da3645bb\" (UID: \"b0d27db3-3e55-4041-aca6-83c2da3645bb\") "
Feb 20 15:18:36.992572 master-0 kubenswrapper[28120]: E0220 15:18:36.992533 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a\": container with ID starting with c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a not found: ID does not exist" containerID="c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a"
Feb 20 15:18:36.992621 master-0 kubenswrapper[28120]: I0220 15:18:36.992578 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a"} err="failed to get container status \"c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a\": rpc error: code = NotFound desc = could not find container \"c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a\": container with ID starting with c7028f36b135e2d26580aa9046750f8ddd4628a1f64c90c9f12aa64e8e042b3a not found: ID does not exist"
Feb 20 15:18:36.993222 master-0 kubenswrapper[28120]: I0220 15:18:36.993207 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0d27db3-3e55-4041-aca6-83c2da3645bb-logs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:36.994852 master-0 kubenswrapper[28120]: I0220 15:18:36.994810 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-scripts" (OuterVolumeSpecName: "scripts") pod "b0d27db3-3e55-4041-aca6-83c2da3645bb" (UID: "b0d27db3-3e55-4041-aca6-83c2da3645bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:18:36.995744 master-0 kubenswrapper[28120]: I0220 15:18:36.995710 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d27db3-3e55-4041-aca6-83c2da3645bb-kube-api-access-fhhmj" (OuterVolumeSpecName: "kube-api-access-fhhmj") pod "b0d27db3-3e55-4041-aca6-83c2da3645bb" (UID: "b0d27db3-3e55-4041-aca6-83c2da3645bb"). InnerVolumeSpecName "kube-api-access-fhhmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:18:37.063148 master-0 kubenswrapper[28120]: I0220 15:18:37.063080 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0d27db3-3e55-4041-aca6-83c2da3645bb" (UID: "b0d27db3-3e55-4041-aca6-83c2da3645bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:18:37.072675 master-0 kubenswrapper[28120]: I0220 15:18:37.072626 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-config-data" (OuterVolumeSpecName: "config-data") pod "b0d27db3-3e55-4041-aca6-83c2da3645bb" (UID: "b0d27db3-3e55-4041-aca6-83c2da3645bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:18:37.097035 master-0 kubenswrapper[28120]: I0220 15:18:37.096961 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:37.097035 master-0 kubenswrapper[28120]: I0220 15:18:37.097021 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:37.097035 master-0 kubenswrapper[28120]: I0220 15:18:37.097032 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:37.097035 master-0 kubenswrapper[28120]: I0220 15:18:37.097047 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhhmj\" (UniqueName: \"kubernetes.io/projected/b0d27db3-3e55-4041-aca6-83c2da3645bb-kube-api-access-fhhmj\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:37.159036 master-0 kubenswrapper[28120]: I0220 15:18:37.158973 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b0d27db3-3e55-4041-aca6-83c2da3645bb" (UID: "b0d27db3-3e55-4041-aca6-83c2da3645bb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:18:37.164346 master-0 kubenswrapper[28120]: I0220 15:18:37.164308 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b0d27db3-3e55-4041-aca6-83c2da3645bb" (UID: "b0d27db3-3e55-4041-aca6-83c2da3645bb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:18:37.202181 master-0 kubenswrapper[28120]: I0220 15:18:37.202093 28120 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:37.202181 master-0 kubenswrapper[28120]: I0220 15:18:37.202134 28120 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0d27db3-3e55-4041-aca6-83c2da3645bb-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:37.300567 master-0 kubenswrapper[28120]: I0220 15:18:37.300483 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7fb458c87b-4xkt8"]
Feb 20 15:18:37.312659 master-0 kubenswrapper[28120]: I0220 15:18:37.312512 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7fb458c87b-4xkt8"]
Feb 20 15:18:37.939360 master-0 kubenswrapper[28120]: I0220 15:18:37.939303 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-internal-api-0" event={"ID":"72b461ef-72b8-4632-803c-a0416f9cfbd3","Type":"ContainerStarted","Data":"79057be160123df4cf2ccf6130a17c4b7d3b4763676190b5f6db85ec89d0b3a5"}
Feb 20 15:18:37.980895 master-0 kubenswrapper[28120]: I0220 15:18:37.980706 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-c0df7-default-internal-api-0" podStartSLOduration=4.980686726 podStartE2EDuration="4.980686726s" podCreationTimestamp="2026-02-20 15:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:37.972114612 +0000 UTC m=+1056.232908215" watchObservedRunningTime="2026-02-20 15:18:37.980686726 +0000 UTC m=+1056.241480289"
Feb 20 15:18:38.077965 master-0 kubenswrapper[28120]: I0220 15:18:38.077879 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d27db3-3e55-4041-aca6-83c2da3645bb" path="/var/lib/kubelet/pods/b0d27db3-3e55-4041-aca6-83c2da3645bb/volumes"
Feb 20 15:18:38.751038 master-0 kubenswrapper[28120]: I0220 15:18:38.750419 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz"
Feb 20 15:18:38.834060 master-0 kubenswrapper[28120]: I0220 15:18:38.832242 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"]
Feb 20 15:18:38.834060 master-0 kubenswrapper[28120]: I0220 15:18:38.832538 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" podUID="89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" containerName="dnsmasq-dns" containerID="cri-o://7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544" gracePeriod=10
Feb 20 15:18:38.985789 master-0 kubenswrapper[28120]: I0220 15:18:38.985611 28120 generic.go:334] "Generic (PLEG): container finished" podID="8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" containerID="7876bd0e3790df9dcf01e0fefd2e587d51e3b8bb2707a59f4fc0b26d9446461c" exitCode=0
Feb 20 15:18:38.985789 master-0 kubenswrapper[28120]: I0220 15:18:38.985728 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-external-api-0" event={"ID":"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b","Type":"ContainerDied","Data":"7876bd0e3790df9dcf01e0fefd2e587d51e3b8bb2707a59f4fc0b26d9446461c"}
Feb 20 15:18:39.788879 master-0 kubenswrapper[28120]: I0220 15:18:39.788821 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"
Feb 20 15:18:39.917852 master-0 kubenswrapper[28120]: I0220 15:18:39.917702 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-ovsdbserver-nb\") pod \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") "
Feb 20 15:18:39.917852 master-0 kubenswrapper[28120]: I0220 15:18:39.917796 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-dns-svc\") pod \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") "
Feb 20 15:18:39.917852 master-0 kubenswrapper[28120]: I0220 15:18:39.917840 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-dns-swift-storage-0\") pod \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") "
Feb 20 15:18:39.918154 master-0 kubenswrapper[28120]: I0220 15:18:39.918054 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-config\") pod \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") "
Feb 20 15:18:39.918246 master-0 kubenswrapper[28120]: I0220 15:18:39.918215 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mb22\"
(UniqueName: \"kubernetes.io/projected/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-kube-api-access-8mb22\") pod \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " Feb 20 15:18:39.918295 master-0 kubenswrapper[28120]: I0220 15:18:39.918259 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-ovsdbserver-sb\") pod \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\" (UID: \"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8\") " Feb 20 15:18:39.922327 master-0 kubenswrapper[28120]: I0220 15:18:39.922268 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-kube-api-access-8mb22" (OuterVolumeSpecName: "kube-api-access-8mb22") pod "89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" (UID: "89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8"). InnerVolumeSpecName "kube-api-access-8mb22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:18:39.996301 master-0 kubenswrapper[28120]: I0220 15:18:39.994694 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-config" (OuterVolumeSpecName: "config") pod "89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" (UID: "89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:18:40.000364 master-0 kubenswrapper[28120]: I0220 15:18:40.000286 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" (UID: "89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:18:40.000860 master-0 kubenswrapper[28120]: I0220 15:18:40.000702 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" (UID: "89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:18:40.006855 master-0 kubenswrapper[28120]: I0220 15:18:40.006801 28120 generic.go:334] "Generic (PLEG): container finished" podID="89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" containerID="7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544" exitCode=0 Feb 20 15:18:40.007045 master-0 kubenswrapper[28120]: I0220 15:18:40.006870 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" event={"ID":"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8","Type":"ContainerDied","Data":"7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544"} Feb 20 15:18:40.007045 master-0 kubenswrapper[28120]: I0220 15:18:40.006903 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" event={"ID":"89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8","Type":"ContainerDied","Data":"288b476546e53a74c0a10270dda70d19b6d4247e66baf0dbac0b86eae2dccd4f"} Feb 20 15:18:40.007045 master-0 kubenswrapper[28120]: I0220 15:18:40.006939 28120 scope.go:117] "RemoveContainer" containerID="7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544" Feb 20 15:18:40.007321 master-0 kubenswrapper[28120]: I0220 15:18:40.007267 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c77ddfc-pgzhs" Feb 20 15:18:40.027042 master-0 kubenswrapper[28120]: I0220 15:18:40.026997 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.027042 master-0 kubenswrapper[28120]: I0220 15:18:40.027025 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.027042 master-0 kubenswrapper[28120]: I0220 15:18:40.027034 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mb22\" (UniqueName: \"kubernetes.io/projected/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-kube-api-access-8mb22\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.027042 master-0 kubenswrapper[28120]: I0220 15:18:40.027043 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.047958 master-0 kubenswrapper[28120]: I0220 15:18:40.047726 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" (UID: "89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:18:40.051560 master-0 kubenswrapper[28120]: I0220 15:18:40.048229 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" (UID: "89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:18:40.084274 master-0 kubenswrapper[28120]: I0220 15:18:40.084211 28120 scope.go:117] "RemoveContainer" containerID="ef7a450eba6522e9892401d56871b2da56525ce9347458a175974c7cbcbb863a" Feb 20 15:18:40.119822 master-0 kubenswrapper[28120]: I0220 15:18:40.119783 28120 scope.go:117] "RemoveContainer" containerID="7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544" Feb 20 15:18:40.120199 master-0 kubenswrapper[28120]: E0220 15:18:40.120165 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544\": container with ID starting with 7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544 not found: ID does not exist" containerID="7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544" Feb 20 15:18:40.120256 master-0 kubenswrapper[28120]: I0220 15:18:40.120195 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544"} err="failed to get container status \"7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544\": rpc error: code = NotFound desc = could not find container \"7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544\": container with ID starting with 7c9a421fb027d0770d7b1c388fc7608c71fe42d5f4abc94a48954842aec29544 not found: ID does not exist" 
Feb 20 15:18:40.120256 master-0 kubenswrapper[28120]: I0220 15:18:40.120215 28120 scope.go:117] "RemoveContainer" containerID="ef7a450eba6522e9892401d56871b2da56525ce9347458a175974c7cbcbb863a" Feb 20 15:18:40.120475 master-0 kubenswrapper[28120]: E0220 15:18:40.120441 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7a450eba6522e9892401d56871b2da56525ce9347458a175974c7cbcbb863a\": container with ID starting with ef7a450eba6522e9892401d56871b2da56525ce9347458a175974c7cbcbb863a not found: ID does not exist" containerID="ef7a450eba6522e9892401d56871b2da56525ce9347458a175974c7cbcbb863a" Feb 20 15:18:40.120475 master-0 kubenswrapper[28120]: I0220 15:18:40.120460 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7a450eba6522e9892401d56871b2da56525ce9347458a175974c7cbcbb863a"} err="failed to get container status \"ef7a450eba6522e9892401d56871b2da56525ce9347458a175974c7cbcbb863a\": rpc error: code = NotFound desc = could not find container \"ef7a450eba6522e9892401d56871b2da56525ce9347458a175974c7cbcbb863a\": container with ID starting with ef7a450eba6522e9892401d56871b2da56525ce9347458a175974c7cbcbb863a not found: ID does not exist" Feb 20 15:18:40.128430 master-0 kubenswrapper[28120]: I0220 15:18:40.128383 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.128430 master-0 kubenswrapper[28120]: I0220 15:18:40.128423 28120 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.150854 master-0 kubenswrapper[28120]: I0220 15:18:40.150818 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:18:40.229329 master-0 kubenswrapper[28120]: I0220 15:18:40.229271 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-logs\") pod \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " Feb 20 15:18:40.229329 master-0 kubenswrapper[28120]: I0220 15:18:40.229341 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-httpd-run\") pod \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " Feb 20 15:18:40.229652 master-0 kubenswrapper[28120]: I0220 15:18:40.229377 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-combined-ca-bundle\") pod \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " Feb 20 15:18:40.229652 master-0 kubenswrapper[28120]: I0220 15:18:40.229405 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-config-data\") pod \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " Feb 20 15:18:40.229652 master-0 kubenswrapper[28120]: I0220 15:18:40.229456 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-scripts\") pod \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " Feb 20 15:18:40.229652 master-0 kubenswrapper[28120]: I0220 15:18:40.229581 28120 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " Feb 20 15:18:40.229652 master-0 kubenswrapper[28120]: I0220 15:18:40.229605 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-public-tls-certs\") pod \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " Feb 20 15:18:40.231463 master-0 kubenswrapper[28120]: I0220 15:18:40.230336 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-logs" (OuterVolumeSpecName: "logs") pod "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" (UID: "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:18:40.231463 master-0 kubenswrapper[28120]: I0220 15:18:40.230521 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" (UID: "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:18:40.233748 master-0 kubenswrapper[28120]: I0220 15:18:40.233717 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-scripts" (OuterVolumeSpecName: "scripts") pod "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" (UID: "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:40.251985 master-0 kubenswrapper[28120]: I0220 15:18:40.251945 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12" (OuterVolumeSpecName: "glance") pod "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" (UID: "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b"). InnerVolumeSpecName "pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 20 15:18:40.261854 master-0 kubenswrapper[28120]: I0220 15:18:40.260914 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" (UID: "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:40.287115 master-0 kubenswrapper[28120]: I0220 15:18:40.287050 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" (UID: "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:40.296522 master-0 kubenswrapper[28120]: I0220 15:18:40.296440 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-config-data" (OuterVolumeSpecName: "config-data") pod "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" (UID: "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:18:40.334035 master-0 kubenswrapper[28120]: I0220 15:18:40.333242 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v46dv\" (UniqueName: \"kubernetes.io/projected/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-kube-api-access-v46dv\") pod \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\" (UID: \"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b\") " Feb 20 15:18:40.334273 master-0 kubenswrapper[28120]: I0220 15:18:40.334104 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.334273 master-0 kubenswrapper[28120]: I0220 15:18:40.334123 28120 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.334273 master-0 kubenswrapper[28120]: I0220 15:18:40.334136 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.334273 master-0 kubenswrapper[28120]: I0220 15:18:40.334147 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.334273 master-0 kubenswrapper[28120]: I0220 15:18:40.334156 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.334273 master-0 kubenswrapper[28120]: I0220 15:18:40.334176 28120 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") on node \"master-0\" " Feb 20 15:18:40.334273 master-0 kubenswrapper[28120]: I0220 15:18:40.334187 28120 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.337268 master-0 kubenswrapper[28120]: I0220 15:18:40.337215 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-kube-api-access-v46dv" (OuterVolumeSpecName: "kube-api-access-v46dv") pod "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" (UID: "8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b"). InnerVolumeSpecName "kube-api-access-v46dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:18:40.375443 master-0 kubenswrapper[28120]: I0220 15:18:40.375392 28120 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 20 15:18:40.375682 master-0 kubenswrapper[28120]: I0220 15:18:40.375597 28120 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a" (UniqueName: "kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12") on node "master-0" Feb 20 15:18:40.436462 master-0 kubenswrapper[28120]: I0220 15:18:40.436392 28120 reconciler_common.go:293] "Volume detached for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.436462 master-0 kubenswrapper[28120]: I0220 15:18:40.436444 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v46dv\" (UniqueName: \"kubernetes.io/projected/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b-kube-api-access-v46dv\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:40.500973 master-0 kubenswrapper[28120]: I0220 15:18:40.500891 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"] Feb 20 15:18:40.519169 master-0 kubenswrapper[28120]: I0220 15:18:40.519108 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9c77ddfc-pgzhs"] Feb 20 15:18:40.592843 master-0 kubenswrapper[28120]: I0220 15:18:40.592788 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-g9knn"] Feb 20 15:18:40.593402 master-0 kubenswrapper[28120]: E0220 15:18:40.593381 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d27db3-3e55-4041-aca6-83c2da3645bb" containerName="placement-log" Feb 20 15:18:40.593402 master-0 kubenswrapper[28120]: I0220 15:18:40.593401 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d27db3-3e55-4041-aca6-83c2da3645bb" containerName="placement-log" Feb 20 15:18:40.593500 master-0 kubenswrapper[28120]: E0220 15:18:40.593421 28120 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" containerName="glance-httpd" Feb 20 15:18:40.593500 master-0 kubenswrapper[28120]: I0220 15:18:40.593428 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" containerName="glance-httpd" Feb 20 15:18:40.593500 master-0 kubenswrapper[28120]: E0220 15:18:40.593444 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" containerName="glance-log" Feb 20 15:18:40.593500 master-0 kubenswrapper[28120]: I0220 15:18:40.593450 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" containerName="glance-log" Feb 20 15:18:40.593500 master-0 kubenswrapper[28120]: E0220 15:18:40.593462 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d27db3-3e55-4041-aca6-83c2da3645bb" containerName="placement-api" Feb 20 15:18:40.593500 master-0 kubenswrapper[28120]: I0220 15:18:40.593470 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d27db3-3e55-4041-aca6-83c2da3645bb" containerName="placement-api" Feb 20 15:18:40.593500 master-0 kubenswrapper[28120]: E0220 15:18:40.593483 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" containerName="dnsmasq-dns" Feb 20 15:18:40.593500 master-0 kubenswrapper[28120]: I0220 15:18:40.593489 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" containerName="dnsmasq-dns" Feb 20 15:18:40.593500 master-0 kubenswrapper[28120]: E0220 15:18:40.593502 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" containerName="init" Feb 20 15:18:40.593500 master-0 kubenswrapper[28120]: I0220 15:18:40.593509 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" containerName="init" Feb 20 15:18:40.593793 master-0 kubenswrapper[28120]: I0220 
15:18:40.593763 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d27db3-3e55-4041-aca6-83c2da3645bb" containerName="placement-api" Feb 20 15:18:40.593825 master-0 kubenswrapper[28120]: I0220 15:18:40.593795 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" containerName="dnsmasq-dns" Feb 20 15:18:40.593825 master-0 kubenswrapper[28120]: I0220 15:18:40.593813 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d27db3-3e55-4041-aca6-83c2da3645bb" containerName="placement-log" Feb 20 15:18:40.593825 master-0 kubenswrapper[28120]: I0220 15:18:40.593824 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" containerName="glance-httpd" Feb 20 15:18:40.593934 master-0 kubenswrapper[28120]: I0220 15:18:40.593845 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" containerName="glance-log" Feb 20 15:18:40.594621 master-0 kubenswrapper[28120]: I0220 15:18:40.594597 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-g9knn" Feb 20 15:18:40.640941 master-0 kubenswrapper[28120]: I0220 15:18:40.634973 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g9knn"] Feb 20 15:18:40.640941 master-0 kubenswrapper[28120]: I0220 15:18:40.640714 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwf9s\" (UniqueName: \"kubernetes.io/projected/23c2f822-4035-41a5-862d-2510ea856786-kube-api-access-gwf9s\") pod \"nova-api-db-create-g9knn\" (UID: \"23c2f822-4035-41a5-862d-2510ea856786\") " pod="openstack/nova-api-db-create-g9knn" Feb 20 15:18:40.640941 master-0 kubenswrapper[28120]: I0220 15:18:40.640791 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f822-4035-41a5-862d-2510ea856786-operator-scripts\") pod \"nova-api-db-create-g9knn\" (UID: \"23c2f822-4035-41a5-862d-2510ea856786\") " pod="openstack/nova-api-db-create-g9knn" Feb 20 15:18:40.743942 master-0 kubenswrapper[28120]: I0220 15:18:40.743485 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f822-4035-41a5-862d-2510ea856786-operator-scripts\") pod \"nova-api-db-create-g9knn\" (UID: \"23c2f822-4035-41a5-862d-2510ea856786\") " pod="openstack/nova-api-db-create-g9knn" Feb 20 15:18:40.743942 master-0 kubenswrapper[28120]: I0220 15:18:40.743898 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwf9s\" (UniqueName: \"kubernetes.io/projected/23c2f822-4035-41a5-862d-2510ea856786-kube-api-access-gwf9s\") pod \"nova-api-db-create-g9knn\" (UID: \"23c2f822-4035-41a5-862d-2510ea856786\") " pod="openstack/nova-api-db-create-g9knn" Feb 20 15:18:40.745173 master-0 kubenswrapper[28120]: I0220 15:18:40.744307 28120 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f822-4035-41a5-862d-2510ea856786-operator-scripts\") pod \"nova-api-db-create-g9knn\" (UID: \"23c2f822-4035-41a5-862d-2510ea856786\") " pod="openstack/nova-api-db-create-g9knn"
Feb 20 15:18:40.762987 master-0 kubenswrapper[28120]: I0220 15:18:40.762840 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwf9s\" (UniqueName: \"kubernetes.io/projected/23c2f822-4035-41a5-862d-2510ea856786-kube-api-access-gwf9s\") pod \"nova-api-db-create-g9knn\" (UID: \"23c2f822-4035-41a5-862d-2510ea856786\") " pod="openstack/nova-api-db-create-g9knn"
Feb 20 15:18:40.817875 master-0 kubenswrapper[28120]: I0220 15:18:40.817804 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5qkvd"]
Feb 20 15:18:40.819807 master-0 kubenswrapper[28120]: I0220 15:18:40.819779 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5qkvd"
Feb 20 15:18:40.837494 master-0 kubenswrapper[28120]: I0220 15:18:40.835946 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9efe-account-create-update-ln54q"]
Feb 20 15:18:40.851785 master-0 kubenswrapper[28120]: I0220 15:18:40.838028 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9efe-account-create-update-ln54q"
Feb 20 15:18:40.851785 master-0 kubenswrapper[28120]: I0220 15:18:40.846898 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d609009e-7cce-4b2a-8506-279710de8190-operator-scripts\") pod \"nova-api-9efe-account-create-update-ln54q\" (UID: \"d609009e-7cce-4b2a-8506-279710de8190\") " pod="openstack/nova-api-9efe-account-create-update-ln54q"
Feb 20 15:18:40.851785 master-0 kubenswrapper[28120]: I0220 15:18:40.846976 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24qvv\" (UniqueName: \"kubernetes.io/projected/d609009e-7cce-4b2a-8506-279710de8190-kube-api-access-24qvv\") pod \"nova-api-9efe-account-create-update-ln54q\" (UID: \"d609009e-7cce-4b2a-8506-279710de8190\") " pod="openstack/nova-api-9efe-account-create-update-ln54q"
Feb 20 15:18:40.851785 master-0 kubenswrapper[28120]: I0220 15:18:40.847083 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6f8l\" (UniqueName: \"kubernetes.io/projected/73782ca9-6bfa-4c3b-abeb-05a890839bc8-kube-api-access-t6f8l\") pod \"nova-cell0-db-create-5qkvd\" (UID: \"73782ca9-6bfa-4c3b-abeb-05a890839bc8\") " pod="openstack/nova-cell0-db-create-5qkvd"
Feb 20 15:18:40.851785 master-0 kubenswrapper[28120]: I0220 15:18:40.847109 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73782ca9-6bfa-4c3b-abeb-05a890839bc8-operator-scripts\") pod \"nova-cell0-db-create-5qkvd\" (UID: \"73782ca9-6bfa-4c3b-abeb-05a890839bc8\") " pod="openstack/nova-cell0-db-create-5qkvd"
Feb 20 15:18:40.852798 master-0 kubenswrapper[28120]: I0220 15:18:40.852495 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 20 15:18:40.860210 master-0 kubenswrapper[28120]: I0220 15:18:40.856859 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5qkvd"]
Feb 20 15:18:40.890234 master-0 kubenswrapper[28120]: I0220 15:18:40.877811 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9efe-account-create-update-ln54q"]
Feb 20 15:18:40.930165 master-0 kubenswrapper[28120]: I0220 15:18:40.914592 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g9knn"
Feb 20 15:18:40.930165 master-0 kubenswrapper[28120]: I0220 15:18:40.925655 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-dcvql"]
Feb 20 15:18:40.930165 master-0 kubenswrapper[28120]: I0220 15:18:40.927599 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dcvql"
Feb 20 15:18:40.941166 master-0 kubenswrapper[28120]: I0220 15:18:40.938089 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dcvql"]
Feb 20 15:18:40.975169 master-0 kubenswrapper[28120]: I0220 15:18:40.972693 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6f8l\" (UniqueName: \"kubernetes.io/projected/73782ca9-6bfa-4c3b-abeb-05a890839bc8-kube-api-access-t6f8l\") pod \"nova-cell0-db-create-5qkvd\" (UID: \"73782ca9-6bfa-4c3b-abeb-05a890839bc8\") " pod="openstack/nova-cell0-db-create-5qkvd"
Feb 20 15:18:40.975169 master-0 kubenswrapper[28120]: I0220 15:18:40.972746 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73782ca9-6bfa-4c3b-abeb-05a890839bc8-operator-scripts\") pod \"nova-cell0-db-create-5qkvd\" (UID: \"73782ca9-6bfa-4c3b-abeb-05a890839bc8\") " pod="openstack/nova-cell0-db-create-5qkvd"
Feb 20 15:18:40.975169 master-0 kubenswrapper[28120]: I0220 15:18:40.974912 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73782ca9-6bfa-4c3b-abeb-05a890839bc8-operator-scripts\") pod \"nova-cell0-db-create-5qkvd\" (UID: \"73782ca9-6bfa-4c3b-abeb-05a890839bc8\") " pod="openstack/nova-cell0-db-create-5qkvd"
Feb 20 15:18:40.976496 master-0 kubenswrapper[28120]: I0220 15:18:40.975548 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d609009e-7cce-4b2a-8506-279710de8190-operator-scripts\") pod \"nova-api-9efe-account-create-update-ln54q\" (UID: \"d609009e-7cce-4b2a-8506-279710de8190\") " pod="openstack/nova-api-9efe-account-create-update-ln54q"
Feb 20 15:18:40.976496 master-0 kubenswrapper[28120]: I0220 15:18:40.975616 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24qvv\" (UniqueName: \"kubernetes.io/projected/d609009e-7cce-4b2a-8506-279710de8190-kube-api-access-24qvv\") pod \"nova-api-9efe-account-create-update-ln54q\" (UID: \"d609009e-7cce-4b2a-8506-279710de8190\") " pod="openstack/nova-api-9efe-account-create-update-ln54q"
Feb 20 15:18:40.976496 master-0 kubenswrapper[28120]: I0220 15:18:40.976461 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d609009e-7cce-4b2a-8506-279710de8190-operator-scripts\") pod \"nova-api-9efe-account-create-update-ln54q\" (UID: \"d609009e-7cce-4b2a-8506-279710de8190\") " pod="openstack/nova-api-9efe-account-create-update-ln54q"
Feb 20 15:18:41.006514 master-0 kubenswrapper[28120]: I0220 15:18:41.006381 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6f8l\" (UniqueName: \"kubernetes.io/projected/73782ca9-6bfa-4c3b-abeb-05a890839bc8-kube-api-access-t6f8l\") pod \"nova-cell0-db-create-5qkvd\" (UID: \"73782ca9-6bfa-4c3b-abeb-05a890839bc8\") " pod="openstack/nova-cell0-db-create-5qkvd"
Feb 20 15:18:41.007862 master-0 kubenswrapper[28120]: I0220 15:18:41.007800 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24qvv\" (UniqueName: \"kubernetes.io/projected/d609009e-7cce-4b2a-8506-279710de8190-kube-api-access-24qvv\") pod \"nova-api-9efe-account-create-update-ln54q\" (UID: \"d609009e-7cce-4b2a-8506-279710de8190\") " pod="openstack/nova-api-9efe-account-create-update-ln54q"
Feb 20 15:18:41.055342 master-0 kubenswrapper[28120]: I0220 15:18:41.055218 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"af765e06-e2aa-4239-8a51-fc29e02fa257","Type":"ContainerStarted","Data":"a26c972f1b27276fe2c2e67689f1d943da6b8323848b9c5dd16ab3fde727a2ab"}
Feb 20 15:18:41.077406 master-0 kubenswrapper[28120]: I0220 15:18:41.077271 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b3af-account-create-update-2ltqm"]
Feb 20 15:18:41.089821 master-0 kubenswrapper[28120]: I0220 15:18:41.088054 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b3af-account-create-update-2ltqm"
Feb 20 15:18:41.090880 master-0 kubenswrapper[28120]: I0220 15:18:41.090798 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 20 15:18:41.097053 master-0 kubenswrapper[28120]: I0220 15:18:41.096996 28120 generic.go:334] "Generic (PLEG): container finished" podID="79055f50-7353-4dc9-90d0-dcd790817149" containerID="2fa867ca854f135856419d998a5414c022a8b9932b1858ea236a7afebe138ef0" exitCode=0
Feb 20 15:18:41.097207 master-0 kubenswrapper[28120]: I0220 15:18:41.097120 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"79055f50-7353-4dc9-90d0-dcd790817149","Type":"ContainerDied","Data":"2fa867ca854f135856419d998a5414c022a8b9932b1858ea236a7afebe138ef0"}
Feb 20 15:18:41.100543 master-0 kubenswrapper[28120]: I0220 15:18:41.100496 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-external-api-0" event={"ID":"8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b","Type":"ContainerDied","Data":"b2b2056d86fa93ae35e0cb2f1b218779c4657bedf9c221d45027c5c2fd17209e"}
Feb 20 15:18:41.100696 master-0 kubenswrapper[28120]: I0220 15:18:41.100678 28120 scope.go:117] "RemoveContainer" containerID="7876bd0e3790df9dcf01e0fefd2e587d51e3b8bb2707a59f4fc0b26d9446461c"
Feb 20 15:18:41.101041 master-0 kubenswrapper[28120]: I0220 15:18:41.101020 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.111759 master-0 kubenswrapper[28120]: I0220 15:18:41.107429 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r74ft\" (UniqueName: \"kubernetes.io/projected/146df682-64c3-4786-9489-2ac4b0ca2811-kube-api-access-r74ft\") pod \"nova-cell1-db-create-dcvql\" (UID: \"146df682-64c3-4786-9489-2ac4b0ca2811\") " pod="openstack/nova-cell1-db-create-dcvql"
Feb 20 15:18:41.119461 master-0 kubenswrapper[28120]: I0220 15:18:41.117836 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/146df682-64c3-4786-9489-2ac4b0ca2811-operator-scripts\") pod \"nova-cell1-db-create-dcvql\" (UID: \"146df682-64c3-4786-9489-2ac4b0ca2811\") " pod="openstack/nova-cell1-db-create-dcvql"
Feb 20 15:18:41.138860 master-0 kubenswrapper[28120]: I0220 15:18:41.135186 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b3af-account-create-update-2ltqm"]
Feb 20 15:18:41.154075 master-0 kubenswrapper[28120]: I0220 15:18:41.154026 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5qkvd"
Feb 20 15:18:41.161255 master-0 kubenswrapper[28120]: I0220 15:18:41.161038 28120 scope.go:117] "RemoveContainer" containerID="75f52938914e3c2ceead05696ed96550dd740a1b3e1f9d04aca0d29cf16fd722"
Feb 20 15:18:41.188127 master-0 kubenswrapper[28120]: I0220 15:18:41.187654 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9efe-account-create-update-ln54q"
Feb 20 15:18:41.206278 master-0 kubenswrapper[28120]: I0220 15:18:41.206236 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c0df7-default-external-api-0"]
Feb 20 15:18:41.219158 master-0 kubenswrapper[28120]: I0220 15:18:41.219123 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c0df7-default-external-api-0"]
Feb 20 15:18:41.220792 master-0 kubenswrapper[28120]: I0220 15:18:41.220771 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/146df682-64c3-4786-9489-2ac4b0ca2811-operator-scripts\") pod \"nova-cell1-db-create-dcvql\" (UID: \"146df682-64c3-4786-9489-2ac4b0ca2811\") " pod="openstack/nova-cell1-db-create-dcvql"
Feb 20 15:18:41.221726 master-0 kubenswrapper[28120]: I0220 15:18:41.221691 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/146df682-64c3-4786-9489-2ac4b0ca2811-operator-scripts\") pod \"nova-cell1-db-create-dcvql\" (UID: \"146df682-64c3-4786-9489-2ac4b0ca2811\") " pod="openstack/nova-cell1-db-create-dcvql"
Feb 20 15:18:41.221866 master-0 kubenswrapper[28120]: I0220 15:18:41.221843 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bedaebf9-d467-474c-9b78-446fd89cb799-operator-scripts\") pod \"nova-cell0-b3af-account-create-update-2ltqm\" (UID: \"bedaebf9-d467-474c-9b78-446fd89cb799\") " pod="openstack/nova-cell0-b3af-account-create-update-2ltqm"
Feb 20 15:18:41.222054 master-0 kubenswrapper[28120]: I0220 15:18:41.222036 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4tjm\" (UniqueName: \"kubernetes.io/projected/bedaebf9-d467-474c-9b78-446fd89cb799-kube-api-access-s4tjm\") pod \"nova-cell0-b3af-account-create-update-2ltqm\" (UID: \"bedaebf9-d467-474c-9b78-446fd89cb799\") " pod="openstack/nova-cell0-b3af-account-create-update-2ltqm"
Feb 20 15:18:41.222234 master-0 kubenswrapper[28120]: I0220 15:18:41.222217 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r74ft\" (UniqueName: \"kubernetes.io/projected/146df682-64c3-4786-9489-2ac4b0ca2811-kube-api-access-r74ft\") pod \"nova-cell1-db-create-dcvql\" (UID: \"146df682-64c3-4786-9489-2ac4b0ca2811\") " pod="openstack/nova-cell1-db-create-dcvql"
Feb 20 15:18:41.249816 master-0 kubenswrapper[28120]: I0220 15:18:41.249260 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c0df7-default-external-api-0"]
Feb 20 15:18:41.252621 master-0 kubenswrapper[28120]: I0220 15:18:41.252588 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.255758 master-0 kubenswrapper[28120]: I0220 15:18:41.255732 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 20 15:18:41.257994 master-0 kubenswrapper[28120]: I0220 15:18:41.256169 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-c0df7-default-external-config-data"
Feb 20 15:18:41.259561 master-0 kubenswrapper[28120]: I0220 15:18:41.259471 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r74ft\" (UniqueName: \"kubernetes.io/projected/146df682-64c3-4786-9489-2ac4b0ca2811-kube-api-access-r74ft\") pod \"nova-cell1-db-create-dcvql\" (UID: \"146df682-64c3-4786-9489-2ac4b0ca2811\") " pod="openstack/nova-cell1-db-create-dcvql"
Feb 20 15:18:41.266163 master-0 kubenswrapper[28120]: I0220 15:18:41.262964 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c0df7-default-external-api-0"]
Feb 20 15:18:41.320542 master-0 kubenswrapper[28120]: I0220 15:18:41.318850 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7035-account-create-update-mldfs"]
Feb 20 15:18:41.320724 master-0 kubenswrapper[28120]: I0220 15:18:41.320572 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7035-account-create-update-mldfs"
Feb 20 15:18:41.324396 master-0 kubenswrapper[28120]: I0220 15:18:41.323127 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 20 15:18:41.329757 master-0 kubenswrapper[28120]: I0220 15:18:41.325848 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bedaebf9-d467-474c-9b78-446fd89cb799-operator-scripts\") pod \"nova-cell0-b3af-account-create-update-2ltqm\" (UID: \"bedaebf9-d467-474c-9b78-446fd89cb799\") " pod="openstack/nova-cell0-b3af-account-create-update-2ltqm"
Feb 20 15:18:41.329757 master-0 kubenswrapper[28120]: I0220 15:18:41.325959 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4tjm\" (UniqueName: \"kubernetes.io/projected/bedaebf9-d467-474c-9b78-446fd89cb799-kube-api-access-s4tjm\") pod \"nova-cell0-b3af-account-create-update-2ltqm\" (UID: \"bedaebf9-d467-474c-9b78-446fd89cb799\") " pod="openstack/nova-cell0-b3af-account-create-update-2ltqm"
Feb 20 15:18:41.329757 master-0 kubenswrapper[28120]: I0220 15:18:41.326538 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bedaebf9-d467-474c-9b78-446fd89cb799-operator-scripts\") pod \"nova-cell0-b3af-account-create-update-2ltqm\" (UID: \"bedaebf9-d467-474c-9b78-446fd89cb799\") " pod="openstack/nova-cell0-b3af-account-create-update-2ltqm"
Feb 20 15:18:41.344405 master-0 kubenswrapper[28120]: I0220 15:18:41.341010 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7035-account-create-update-mldfs"]
Feb 20 15:18:41.351700 master-0 kubenswrapper[28120]: I0220 15:18:41.351655 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4tjm\" (UniqueName: \"kubernetes.io/projected/bedaebf9-d467-474c-9b78-446fd89cb799-kube-api-access-s4tjm\") pod \"nova-cell0-b3af-account-create-update-2ltqm\" (UID: \"bedaebf9-d467-474c-9b78-446fd89cb799\") " pod="openstack/nova-cell0-b3af-account-create-update-2ltqm"
Feb 20 15:18:41.386355 master-0 kubenswrapper[28120]: I0220 15:18:41.385232 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dcvql"
Feb 20 15:18:41.429245 master-0 kubenswrapper[28120]: I0220 15:18:41.427888 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b15a78c-0233-420f-989c-f52832b53e28-logs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.429245 master-0 kubenswrapper[28120]: I0220 15:18:41.427953 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b15a78c-0233-420f-989c-f52832b53e28-combined-ca-bundle\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.429245 master-0 kubenswrapper[28120]: I0220 15:18:41.428060 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b15a78c-0233-420f-989c-f52832b53e28-public-tls-certs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.429245 master-0 kubenswrapper[28120]: I0220 15:18:41.428152 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f40e1-6a44-4a9b-a939-53cd2f75472d-operator-scripts\") pod \"nova-cell1-7035-account-create-update-mldfs\" (UID: \"a62f40e1-6a44-4a9b-a939-53cd2f75472d\") " pod="openstack/nova-cell1-7035-account-create-update-mldfs"
Feb 20 15:18:41.429245 master-0 kubenswrapper[28120]: I0220 15:18:41.428464 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b15a78c-0233-420f-989c-f52832b53e28-httpd-run\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.429245 master-0 kubenswrapper[28120]: I0220 15:18:41.428587 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b15a78c-0233-420f-989c-f52832b53e28-scripts\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.429245 master-0 kubenswrapper[28120]: I0220 15:18:41.428610 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnxcc\" (UniqueName: \"kubernetes.io/projected/2b15a78c-0233-420f-989c-f52832b53e28-kube-api-access-cnxcc\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.429245 master-0 kubenswrapper[28120]: I0220 15:18:41.428812 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ktqg\" (UniqueName: \"kubernetes.io/projected/a62f40e1-6a44-4a9b-a939-53cd2f75472d-kube-api-access-8ktqg\") pod \"nova-cell1-7035-account-create-update-mldfs\" (UID: \"a62f40e1-6a44-4a9b-a939-53cd2f75472d\") " pod="openstack/nova-cell1-7035-account-create-update-mldfs"
Feb 20 15:18:41.429575 master-0 kubenswrapper[28120]: I0220 15:18:41.429081 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.429575 master-0 kubenswrapper[28120]: I0220 15:18:41.429535 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b15a78c-0233-420f-989c-f52832b53e28-config-data\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.461457 master-0 kubenswrapper[28120]: I0220 15:18:41.461397 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g9knn"]
Feb 20 15:18:41.495055 master-0 kubenswrapper[28120]: I0220 15:18:41.494412 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b3af-account-create-update-2ltqm"
Feb 20 15:18:41.535283 master-0 kubenswrapper[28120]: I0220 15:18:41.535234 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.535428 master-0 kubenswrapper[28120]: I0220 15:18:41.535326 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b15a78c-0233-420f-989c-f52832b53e28-config-data\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.535428 master-0 kubenswrapper[28120]: I0220 15:18:41.535368 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b15a78c-0233-420f-989c-f52832b53e28-combined-ca-bundle\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.535428 master-0 kubenswrapper[28120]: I0220 15:18:41.535384 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b15a78c-0233-420f-989c-f52832b53e28-logs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.535428 master-0 kubenswrapper[28120]: I0220 15:18:41.535402 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b15a78c-0233-420f-989c-f52832b53e28-public-tls-certs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.535428 master-0 kubenswrapper[28120]: I0220 15:18:41.535428 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f40e1-6a44-4a9b-a939-53cd2f75472d-operator-scripts\") pod \"nova-cell1-7035-account-create-update-mldfs\" (UID: \"a62f40e1-6a44-4a9b-a939-53cd2f75472d\") " pod="openstack/nova-cell1-7035-account-create-update-mldfs"
Feb 20 15:18:41.535575 master-0 kubenswrapper[28120]: I0220 15:18:41.535482 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b15a78c-0233-420f-989c-f52832b53e28-httpd-run\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.535575 master-0 kubenswrapper[28120]: I0220 15:18:41.535505 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b15a78c-0233-420f-989c-f52832b53e28-scripts\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.535575 master-0 kubenswrapper[28120]: I0220 15:18:41.535524 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnxcc\" (UniqueName: \"kubernetes.io/projected/2b15a78c-0233-420f-989c-f52832b53e28-kube-api-access-cnxcc\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.535575 master-0 kubenswrapper[28120]: I0220 15:18:41.535567 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ktqg\" (UniqueName: \"kubernetes.io/projected/a62f40e1-6a44-4a9b-a939-53cd2f75472d-kube-api-access-8ktqg\") pod \"nova-cell1-7035-account-create-update-mldfs\" (UID: \"a62f40e1-6a44-4a9b-a939-53cd2f75472d\") " pod="openstack/nova-cell1-7035-account-create-update-mldfs"
Feb 20 15:18:41.537847 master-0 kubenswrapper[28120]: I0220 15:18:41.535936 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b15a78c-0233-420f-989c-f52832b53e28-logs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.547221 master-0 kubenswrapper[28120]: I0220 15:18:41.539394 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f40e1-6a44-4a9b-a939-53cd2f75472d-operator-scripts\") pod \"nova-cell1-7035-account-create-update-mldfs\" (UID: \"a62f40e1-6a44-4a9b-a939-53cd2f75472d\") " pod="openstack/nova-cell1-7035-account-create-update-mldfs"
Feb 20 15:18:41.547322 master-0 kubenswrapper[28120]: I0220 15:18:41.547299 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b15a78c-0233-420f-989c-f52832b53e28-httpd-run\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.551112 master-0 kubenswrapper[28120]: I0220 15:18:41.550983 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b15a78c-0233-420f-989c-f52832b53e28-combined-ca-bundle\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.551374 master-0 kubenswrapper[28120]: I0220 15:18:41.551339 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b15a78c-0233-420f-989c-f52832b53e28-public-tls-certs\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.555443 master-0 kubenswrapper[28120]: I0220 15:18:41.554876 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b15a78c-0233-420f-989c-f52832b53e28-scripts\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.559738 master-0 kubenswrapper[28120]: I0220 15:18:41.557053 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b15a78c-0233-420f-989c-f52832b53e28-config-data\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.561724 master-0 kubenswrapper[28120]: I0220 15:18:41.561245 28120 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 20 15:18:41.561724 master-0 kubenswrapper[28120]: I0220 15:18:41.561279 28120 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/975746d433a1b57bdfbaee59d00ca3c763a79be4597c58f513ba3507287deebe/globalmount\"" pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.562928 master-0 kubenswrapper[28120]: I0220 15:18:41.562886 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ktqg\" (UniqueName: \"kubernetes.io/projected/a62f40e1-6a44-4a9b-a939-53cd2f75472d-kube-api-access-8ktqg\") pod \"nova-cell1-7035-account-create-update-mldfs\" (UID: \"a62f40e1-6a44-4a9b-a939-53cd2f75472d\") " pod="openstack/nova-cell1-7035-account-create-update-mldfs"
Feb 20 15:18:41.564516 master-0 kubenswrapper[28120]: I0220 15:18:41.564467 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnxcc\" (UniqueName: \"kubernetes.io/projected/2b15a78c-0233-420f-989c-f52832b53e28-kube-api-access-cnxcc\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:41.674123 master-0 kubenswrapper[28120]: I0220 15:18:41.674089 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7035-account-create-update-mldfs"
Feb 20 15:18:41.727776 master-0 kubenswrapper[28120]: I0220 15:18:41.727689 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 20 15:18:41.877943 master-0 kubenswrapper[28120]: I0220 15:18:41.866702 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5qkvd"]
Feb 20 15:18:41.878568 master-0 kubenswrapper[28120]: I0220 15:18:41.878347 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9efe-account-create-update-ln54q"]
Feb 20 15:18:41.884735 master-0 kubenswrapper[28120]: I0220 15:18:41.883031 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/79055f50-7353-4dc9-90d0-dcd790817149-etc-podinfo\") pod \"79055f50-7353-4dc9-90d0-dcd790817149\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") "
Feb 20 15:18:41.884735 master-0 kubenswrapper[28120]: I0220 15:18:41.883119 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/79055f50-7353-4dc9-90d0-dcd790817149-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"79055f50-7353-4dc9-90d0-dcd790817149\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") "
Feb 20 15:18:41.884735 master-0 kubenswrapper[28120]: I0220 15:18:41.883209 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-config\") pod \"79055f50-7353-4dc9-90d0-dcd790817149\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") "
Feb 20 15:18:41.884735 master-0 kubenswrapper[28120]: I0220 15:18:41.883240 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6zgf\" (UniqueName: \"kubernetes.io/projected/79055f50-7353-4dc9-90d0-dcd790817149-kube-api-access-z6zgf\") pod \"79055f50-7353-4dc9-90d0-dcd790817149\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") "
Feb 20 15:18:41.884735 master-0 kubenswrapper[28120]: I0220 15:18:41.883310 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/79055f50-7353-4dc9-90d0-dcd790817149-var-lib-ironic\") pod \"79055f50-7353-4dc9-90d0-dcd790817149\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") "
Feb 20 15:18:41.884735 master-0 kubenswrapper[28120]: I0220 15:18:41.883405 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-scripts\") pod \"79055f50-7353-4dc9-90d0-dcd790817149\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") "
Feb 20 15:18:41.884735 master-0 kubenswrapper[28120]: I0220 15:18:41.883423 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-combined-ca-bundle\") pod \"79055f50-7353-4dc9-90d0-dcd790817149\" (UID: \"79055f50-7353-4dc9-90d0-dcd790817149\") "
Feb 20 15:18:41.884735 master-0 kubenswrapper[28120]: I0220 15:18:41.883524 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79055f50-7353-4dc9-90d0-dcd790817149-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "79055f50-7353-4dc9-90d0-dcd790817149" (UID: "79055f50-7353-4dc9-90d0-dcd790817149"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:18:41.884735 master-0 kubenswrapper[28120]: I0220 15:18:41.883850 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79055f50-7353-4dc9-90d0-dcd790817149-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "79055f50-7353-4dc9-90d0-dcd790817149" (UID: "79055f50-7353-4dc9-90d0-dcd790817149"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:18:41.884735 master-0 kubenswrapper[28120]: I0220 15:18:41.884517 28120 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/79055f50-7353-4dc9-90d0-dcd790817149-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:41.884735 master-0 kubenswrapper[28120]: I0220 15:18:41.884533 28120 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/79055f50-7353-4dc9-90d0-dcd790817149-var-lib-ironic\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:41.888672 master-0 kubenswrapper[28120]: I0220 15:18:41.888397 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-scripts" (OuterVolumeSpecName: "scripts") pod "79055f50-7353-4dc9-90d0-dcd790817149" (UID: "79055f50-7353-4dc9-90d0-dcd790817149"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:18:41.889740 master-0 kubenswrapper[28120]: I0220 15:18:41.888784 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/79055f50-7353-4dc9-90d0-dcd790817149-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "79055f50-7353-4dc9-90d0-dcd790817149" (UID: "79055f50-7353-4dc9-90d0-dcd790817149"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 20 15:18:41.889740 master-0 kubenswrapper[28120]: I0220 15:18:41.889277 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-config" (OuterVolumeSpecName: "config") pod "79055f50-7353-4dc9-90d0-dcd790817149" (UID: "79055f50-7353-4dc9-90d0-dcd790817149"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:18:41.896403 master-0 kubenswrapper[28120]: I0220 15:18:41.896346 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79055f50-7353-4dc9-90d0-dcd790817149-kube-api-access-z6zgf" (OuterVolumeSpecName: "kube-api-access-z6zgf") pod "79055f50-7353-4dc9-90d0-dcd790817149" (UID: "79055f50-7353-4dc9-90d0-dcd790817149"). InnerVolumeSpecName "kube-api-access-z6zgf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:18:41.928070 master-0 kubenswrapper[28120]: I0220 15:18:41.927713 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79055f50-7353-4dc9-90d0-dcd790817149" (UID: "79055f50-7353-4dc9-90d0-dcd790817149"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:18:41.986455 master-0 kubenswrapper[28120]: I0220 15:18:41.986402 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:41.986455 master-0 kubenswrapper[28120]: I0220 15:18:41.986447 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:41.986455 master-0 kubenswrapper[28120]: I0220 15:18:41.986458 28120 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/79055f50-7353-4dc9-90d0-dcd790817149-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:41.986618 master-0 kubenswrapper[28120]: I0220 15:18:41.986467 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName:
\"kubernetes.io/secret/79055f50-7353-4dc9-90d0-dcd790817149-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:41.986618 master-0 kubenswrapper[28120]: I0220 15:18:41.986479 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6zgf\" (UniqueName: \"kubernetes.io/projected/79055f50-7353-4dc9-90d0-dcd790817149-kube-api-access-z6zgf\") on node \"master-0\" DevicePath \"\"" Feb 20 15:18:42.104068 master-0 kubenswrapper[28120]: I0220 15:18:42.103998 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8" path="/var/lib/kubelet/pods/89d3e8f0-e3b4-4ec7-870a-9fc5934f91c8/volumes" Feb 20 15:18:42.105083 master-0 kubenswrapper[28120]: I0220 15:18:42.105055 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b" path="/var/lib/kubelet/pods/8cc39299-2dae-4ea0-9fb5-c23d3ae5e24b/volumes" Feb 20 15:18:42.115168 master-0 kubenswrapper[28120]: I0220 15:18:42.114778 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dcvql"] Feb 20 15:18:42.126278 master-0 kubenswrapper[28120]: I0220 15:18:42.126220 28120 generic.go:334] "Generic (PLEG): container finished" podID="23c2f822-4035-41a5-862d-2510ea856786" containerID="2bb50f40ab01f7e197f89d19c4b445e30ac25e7f082474323530afd5d2b1a715" exitCode=0 Feb 20 15:18:42.126506 master-0 kubenswrapper[28120]: I0220 15:18:42.126293 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g9knn" event={"ID":"23c2f822-4035-41a5-862d-2510ea856786","Type":"ContainerDied","Data":"2bb50f40ab01f7e197f89d19c4b445e30ac25e7f082474323530afd5d2b1a715"} Feb 20 15:18:42.126506 master-0 kubenswrapper[28120]: I0220 15:18:42.126319 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g9knn" 
event={"ID":"23c2f822-4035-41a5-862d-2510ea856786","Type":"ContainerStarted","Data":"fa10b88157a8c61c53451082f5e6ef645e49d9cee56b90ef463bd81437c00560"} Feb 20 15:18:42.130077 master-0 kubenswrapper[28120]: I0220 15:18:42.130021 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"79055f50-7353-4dc9-90d0-dcd790817149","Type":"ContainerDied","Data":"1c3a377282bdbd7bfd266916a3c34c3c76f02757c850835f2b9ce8ed4c399939"} Feb 20 15:18:42.130161 master-0 kubenswrapper[28120]: I0220 15:18:42.130084 28120 scope.go:117] "RemoveContainer" containerID="2fa867ca854f135856419d998a5414c022a8b9932b1858ea236a7afebe138ef0" Feb 20 15:18:42.130218 master-0 kubenswrapper[28120]: I0220 15:18:42.130197 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 20 15:18:42.148757 master-0 kubenswrapper[28120]: I0220 15:18:42.148677 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9efe-account-create-update-ln54q" event={"ID":"d609009e-7cce-4b2a-8506-279710de8190","Type":"ContainerStarted","Data":"30e47f0b94e03dcf8de53ffe0bc833d845677ad45c07fb677e6768f4ae1cd698"} Feb 20 15:18:42.153308 master-0 kubenswrapper[28120]: I0220 15:18:42.152292 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5qkvd" event={"ID":"73782ca9-6bfa-4c3b-abeb-05a890839bc8","Type":"ContainerStarted","Data":"d869f9f83942d839e9c8c907b7016bd8cde13864d9ef9d0826211d3c6bfc25c3"} Feb 20 15:18:42.332092 master-0 kubenswrapper[28120]: I0220 15:18:42.332007 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b3af-account-create-update-2ltqm"] Feb 20 15:18:42.344126 master-0 kubenswrapper[28120]: I0220 15:18:42.344077 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 20 15:18:42.355106 master-0 kubenswrapper[28120]: I0220 15:18:42.355048 28120 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Feb 20 15:18:42.360262 master-0 kubenswrapper[28120]: I0220 15:18:42.359558 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-5qkvd" podStartSLOduration=2.359538521 podStartE2EDuration="2.359538521s" podCreationTimestamp="2026-02-20 15:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:42.277460836 +0000 UTC m=+1060.538254419" watchObservedRunningTime="2026-02-20 15:18:42.359538521 +0000 UTC m=+1060.620332084" Feb 20 15:18:42.412248 master-0 kubenswrapper[28120]: I0220 15:18:42.399396 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 20 15:18:42.412248 master-0 kubenswrapper[28120]: E0220 15:18:42.400039 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79055f50-7353-4dc9-90d0-dcd790817149" containerName="ironic-python-agent-init" Feb 20 15:18:42.412248 master-0 kubenswrapper[28120]: I0220 15:18:42.400056 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="79055f50-7353-4dc9-90d0-dcd790817149" containerName="ironic-python-agent-init" Feb 20 15:18:42.412248 master-0 kubenswrapper[28120]: I0220 15:18:42.400386 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="79055f50-7353-4dc9-90d0-dcd790817149" containerName="ironic-python-agent-init" Feb 20 15:18:42.412248 master-0 kubenswrapper[28120]: I0220 15:18:42.403711 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 20 15:18:42.433316 master-0 kubenswrapper[28120]: I0220 15:18:42.425711 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 20 15:18:42.433316 master-0 kubenswrapper[28120]: I0220 15:18:42.425911 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 20 15:18:42.433316 master-0 kubenswrapper[28120]: I0220 15:18:42.426691 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 20 15:18:42.433316 master-0 kubenswrapper[28120]: I0220 15:18:42.426792 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Feb 20 15:18:42.433316 master-0 kubenswrapper[28120]: I0220 15:18:42.432430 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Feb 20 15:18:42.461284 master-0 kubenswrapper[28120]: I0220 15:18:42.457959 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 20 15:18:42.504723 master-0 kubenswrapper[28120]: I0220 15:18:42.504591 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7035-account-create-update-mldfs"] Feb 20 15:18:42.532812 master-0 kubenswrapper[28120]: I0220 15:18:42.528284 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.532812 master-0 kubenswrapper[28120]: I0220 15:18:42.528361 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-scripts\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.532812 master-0 kubenswrapper[28120]: I0220 15:18:42.528383 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.532812 master-0 kubenswrapper[28120]: I0220 15:18:42.528419 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/ffe50c46-f30b-45e5-88ad-740e508e429b-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.532812 master-0 kubenswrapper[28120]: I0220 15:18:42.528469 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-config\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.532812 master-0 kubenswrapper[28120]: I0220 15:18:42.528492 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ffe50c46-f30b-45e5-88ad-740e508e429b-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.532812 master-0 kubenswrapper[28120]: I0220 15:18:42.528537 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" 
(UniqueName: \"kubernetes.io/empty-dir/ffe50c46-f30b-45e5-88ad-740e508e429b-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.532812 master-0 kubenswrapper[28120]: I0220 15:18:42.528592 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phslb\" (UniqueName: \"kubernetes.io/projected/ffe50c46-f30b-45e5-88ad-740e508e429b-kube-api-access-phslb\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.532812 master-0 kubenswrapper[28120]: I0220 15:18:42.528632 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.584269 master-0 kubenswrapper[28120]: I0220 15:18:42.584220 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d9697113-6e6c-4a10-ac1b-5299d0bc397a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a382691f-8fd3-4d0d-89a8-8dbd32acfc12\") pod \"glance-c0df7-default-external-api-0\" (UID: \"2b15a78c-0233-420f-989c-f52832b53e28\") " pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:18:42.630895 master-0 kubenswrapper[28120]: I0220 15:18:42.630840 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.631135 master-0 kubenswrapper[28120]: I0220 15:18:42.630941 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.631135 master-0 kubenswrapper[28120]: I0220 15:18:42.630977 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-scripts\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.631135 master-0 kubenswrapper[28120]: I0220 15:18:42.631000 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.631135 master-0 kubenswrapper[28120]: I0220 15:18:42.631042 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/ffe50c46-f30b-45e5-88ad-740e508e429b-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.631135 master-0 kubenswrapper[28120]: I0220 15:18:42.631090 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-config\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.631135 master-0 kubenswrapper[28120]: I0220 15:18:42.631111 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/ffe50c46-f30b-45e5-88ad-740e508e429b-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.631328 master-0 kubenswrapper[28120]: I0220 15:18:42.631155 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/ffe50c46-f30b-45e5-88ad-740e508e429b-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.631328 master-0 kubenswrapper[28120]: I0220 15:18:42.631193 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phslb\" (UniqueName: \"kubernetes.io/projected/ffe50c46-f30b-45e5-88ad-740e508e429b-kube-api-access-phslb\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.635340 master-0 kubenswrapper[28120]: I0220 15:18:42.634937 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/ffe50c46-f30b-45e5-88ad-740e508e429b-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.646939 master-0 kubenswrapper[28120]: I0220 15:18:42.636638 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.646939 master-0 kubenswrapper[28120]: I0220 15:18:42.637851 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-scripts\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.646939 master-0 kubenswrapper[28120]: I0220 15:18:42.644847 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-config\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.646939 master-0 kubenswrapper[28120]: I0220 15:18:42.645109 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/ffe50c46-f30b-45e5-88ad-740e508e429b-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.647589 master-0 kubenswrapper[28120]: I0220 15:18:42.647534 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.647692 master-0 kubenswrapper[28120]: I0220 15:18:42.647660 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ffe50c46-f30b-45e5-88ad-740e508e429b-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.670028 master-0 kubenswrapper[28120]: I0220 15:18:42.650730 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phslb\" (UniqueName: \"kubernetes.io/projected/ffe50c46-f30b-45e5-88ad-740e508e429b-kube-api-access-phslb\") pod 
\"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.670028 master-0 kubenswrapper[28120]: I0220 15:18:42.653414 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe50c46-f30b-45e5-88ad-740e508e429b-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"ffe50c46-f30b-45e5-88ad-740e508e429b\") " pod="openstack/ironic-inspector-0" Feb 20 15:18:42.775714 master-0 kubenswrapper[28120]: I0220 15:18:42.775417 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:18:42.892979 master-0 kubenswrapper[28120]: I0220 15:18:42.892416 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 20 15:18:43.184887 master-0 kubenswrapper[28120]: I0220 15:18:43.184524 28120 generic.go:334] "Generic (PLEG): container finished" podID="73782ca9-6bfa-4c3b-abeb-05a890839bc8" containerID="94c15355c0bd90975317c350c57edcae707ceb7232028d339487c55f10af41c0" exitCode=0 Feb 20 15:18:43.184887 master-0 kubenswrapper[28120]: I0220 15:18:43.184631 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5qkvd" event={"ID":"73782ca9-6bfa-4c3b-abeb-05a890839bc8","Type":"ContainerDied","Data":"94c15355c0bd90975317c350c57edcae707ceb7232028d339487c55f10af41c0"} Feb 20 15:18:43.187228 master-0 kubenswrapper[28120]: I0220 15:18:43.187089 28120 generic.go:334] "Generic (PLEG): container finished" podID="146df682-64c3-4786-9489-2ac4b0ca2811" containerID="049dd4be7bb971d4180e89f42cca168d6a7ae1700137642d345f85c7cbdcca0f" exitCode=0 Feb 20 15:18:43.187228 master-0 kubenswrapper[28120]: I0220 15:18:43.187173 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dcvql" 
event={"ID":"146df682-64c3-4786-9489-2ac4b0ca2811","Type":"ContainerDied","Data":"049dd4be7bb971d4180e89f42cca168d6a7ae1700137642d345f85c7cbdcca0f"} Feb 20 15:18:43.187228 master-0 kubenswrapper[28120]: I0220 15:18:43.187212 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dcvql" event={"ID":"146df682-64c3-4786-9489-2ac4b0ca2811","Type":"ContainerStarted","Data":"9188671291c2e2d0bec35f9cf08ce9d566853077b4c5a16b7f395fa55ac73b84"} Feb 20 15:18:43.190297 master-0 kubenswrapper[28120]: I0220 15:18:43.190196 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7035-account-create-update-mldfs" event={"ID":"a62f40e1-6a44-4a9b-a939-53cd2f75472d","Type":"ContainerStarted","Data":"82d16db9fee5d39c4b72fee66a71e601aac9ce37c79f30d9b0a14396b5e580c8"} Feb 20 15:18:43.190297 master-0 kubenswrapper[28120]: I0220 15:18:43.190227 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7035-account-create-update-mldfs" event={"ID":"a62f40e1-6a44-4a9b-a939-53cd2f75472d","Type":"ContainerStarted","Data":"942bfffcc065be7485374808da5d4522deda8a518d4d31755a5b91c6b89bbb9f"} Feb 20 15:18:43.196345 master-0 kubenswrapper[28120]: I0220 15:18:43.196289 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b3af-account-create-update-2ltqm" event={"ID":"bedaebf9-d467-474c-9b78-446fd89cb799","Type":"ContainerStarted","Data":"177dfb4dbb933226def2a02fc66e07aaa7aada2791f996c46c55084a48bbdc2d"} Feb 20 15:18:43.196345 master-0 kubenswrapper[28120]: I0220 15:18:43.196319 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b3af-account-create-update-2ltqm" event={"ID":"bedaebf9-d467-474c-9b78-446fd89cb799","Type":"ContainerStarted","Data":"39a0815d587ac7cba57c78faaadb3f92b765b7b69e7844b28bdb8bfc21e09c04"} Feb 20 15:18:43.204860 master-0 kubenswrapper[28120]: I0220 15:18:43.203664 28120 generic.go:334] "Generic (PLEG): container finished" 
podID="d609009e-7cce-4b2a-8506-279710de8190" containerID="724d87d1629f7bd23debae84406ff4569267123e5a68c683d12bc0ac40e7328b" exitCode=0 Feb 20 15:18:43.204860 master-0 kubenswrapper[28120]: I0220 15:18:43.203946 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9efe-account-create-update-ln54q" event={"ID":"d609009e-7cce-4b2a-8506-279710de8190","Type":"ContainerDied","Data":"724d87d1629f7bd23debae84406ff4569267123e5a68c683d12bc0ac40e7328b"} Feb 20 15:18:43.296190 master-0 kubenswrapper[28120]: I0220 15:18:43.295781 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-b3af-account-create-update-2ltqm" podStartSLOduration=3.295759555 podStartE2EDuration="3.295759555s" podCreationTimestamp="2026-02-20 15:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:43.251738798 +0000 UTC m=+1061.512532361" watchObservedRunningTime="2026-02-20 15:18:43.295759555 +0000 UTC m=+1061.556553118" Feb 20 15:18:43.327105 master-0 kubenswrapper[28120]: I0220 15:18:43.326898 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-7035-account-create-update-mldfs" podStartSLOduration=2.3268763310000002 podStartE2EDuration="2.326876331s" podCreationTimestamp="2026-02-20 15:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:43.274831214 +0000 UTC m=+1061.535624777" watchObservedRunningTime="2026-02-20 15:18:43.326876331 +0000 UTC m=+1061.587669904" Feb 20 15:18:43.362943 master-0 kubenswrapper[28120]: W0220 15:18:43.358037 28120 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b15a78c_0233_420f_989c_f52832b53e28.slice/crio-43a683b61eb5b53e424ea164c78e79547fdcd3e58c065c61f1f7fad0960b1ae5 WatchSource:0}: Error finding container 43a683b61eb5b53e424ea164c78e79547fdcd3e58c065c61f1f7fad0960b1ae5: Status 404 returned error can't find the container with id 43a683b61eb5b53e424ea164c78e79547fdcd3e58c065c61f1f7fad0960b1ae5 Feb 20 15:18:43.371942 master-0 kubenswrapper[28120]: I0220 15:18:43.369644 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c0df7-default-external-api-0"] Feb 20 15:18:43.650416 master-0 kubenswrapper[28120]: I0220 15:18:43.650352 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 20 15:18:43.661762 master-0 kubenswrapper[28120]: W0220 15:18:43.661652 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe50c46_f30b_45e5_88ad_740e508e429b.slice/crio-f6b22e2028c43abed1e807cd5aa0ca3d4fbc4776177a37179c9cf6fe150a99d7 WatchSource:0}: Error finding container f6b22e2028c43abed1e807cd5aa0ca3d4fbc4776177a37179c9cf6fe150a99d7: Status 404 returned error can't find the container with id f6b22e2028c43abed1e807cd5aa0ca3d4fbc4776177a37179c9cf6fe150a99d7 Feb 20 15:18:43.759284 master-0 kubenswrapper[28120]: I0220 15:18:43.759229 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-g9knn"
Feb 20 15:18:43.872965 master-0 kubenswrapper[28120]: I0220 15:18:43.870999 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f822-4035-41a5-862d-2510ea856786-operator-scripts\") pod \"23c2f822-4035-41a5-862d-2510ea856786\" (UID: \"23c2f822-4035-41a5-862d-2510ea856786\") "
Feb 20 15:18:43.872965 master-0 kubenswrapper[28120]: I0220 15:18:43.871196 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwf9s\" (UniqueName: \"kubernetes.io/projected/23c2f822-4035-41a5-862d-2510ea856786-kube-api-access-gwf9s\") pod \"23c2f822-4035-41a5-862d-2510ea856786\" (UID: \"23c2f822-4035-41a5-862d-2510ea856786\") "
Feb 20 15:18:43.872965 master-0 kubenswrapper[28120]: I0220 15:18:43.872535 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23c2f822-4035-41a5-862d-2510ea856786-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23c2f822-4035-41a5-862d-2510ea856786" (UID: "23c2f822-4035-41a5-862d-2510ea856786"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:18:43.876965 master-0 kubenswrapper[28120]: I0220 15:18:43.876085 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c2f822-4035-41a5-862d-2510ea856786-kube-api-access-gwf9s" (OuterVolumeSpecName: "kube-api-access-gwf9s") pod "23c2f822-4035-41a5-862d-2510ea856786" (UID: "23c2f822-4035-41a5-862d-2510ea856786"). InnerVolumeSpecName "kube-api-access-gwf9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:18:43.974897 master-0 kubenswrapper[28120]: I0220 15:18:43.974745 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c2f822-4035-41a5-862d-2510ea856786-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:43.974897 master-0 kubenswrapper[28120]: I0220 15:18:43.974807 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwf9s\" (UniqueName: \"kubernetes.io/projected/23c2f822-4035-41a5-862d-2510ea856786-kube-api-access-gwf9s\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:44.075214 master-0 kubenswrapper[28120]: I0220 15:18:44.075153 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79055f50-7353-4dc9-90d0-dcd790817149" path="/var/lib/kubelet/pods/79055f50-7353-4dc9-90d0-dcd790817149/volumes"
Feb 20 15:18:44.229685 master-0 kubenswrapper[28120]: I0220 15:18:44.229544 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-external-api-0" event={"ID":"2b15a78c-0233-420f-989c-f52832b53e28","Type":"ContainerStarted","Data":"e67f5c13eace168e405687c6aabb1c0b5bea6b901cdd45f76f32db9c65a522db"}
Feb 20 15:18:44.229685 master-0 kubenswrapper[28120]: I0220 15:18:44.229610 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-external-api-0" event={"ID":"2b15a78c-0233-420f-989c-f52832b53e28","Type":"ContainerStarted","Data":"43a683b61eb5b53e424ea164c78e79547fdcd3e58c065c61f1f7fad0960b1ae5"}
Feb 20 15:18:44.234104 master-0 kubenswrapper[28120]: I0220 15:18:44.234046 28120 generic.go:334] "Generic (PLEG): container finished" podID="ffe50c46-f30b-45e5-88ad-740e508e429b" containerID="8bfe3b45b8f90b729c348ef95aad4a90715cfa955e2113b35c477eb33cd754e5" exitCode=0
Feb 20 15:18:44.234245 master-0 kubenswrapper[28120]: I0220 15:18:44.234146 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"ffe50c46-f30b-45e5-88ad-740e508e429b","Type":"ContainerDied","Data":"8bfe3b45b8f90b729c348ef95aad4a90715cfa955e2113b35c477eb33cd754e5"}
Feb 20 15:18:44.234245 master-0 kubenswrapper[28120]: I0220 15:18:44.234194 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"ffe50c46-f30b-45e5-88ad-740e508e429b","Type":"ContainerStarted","Data":"f6b22e2028c43abed1e807cd5aa0ca3d4fbc4776177a37179c9cf6fe150a99d7"}
Feb 20 15:18:44.238574 master-0 kubenswrapper[28120]: I0220 15:18:44.238485 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g9knn" event={"ID":"23c2f822-4035-41a5-862d-2510ea856786","Type":"ContainerDied","Data":"fa10b88157a8c61c53451082f5e6ef645e49d9cee56b90ef463bd81437c00560"}
Feb 20 15:18:44.238662 master-0 kubenswrapper[28120]: I0220 15:18:44.238586 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa10b88157a8c61c53451082f5e6ef645e49d9cee56b90ef463bd81437c00560"
Feb 20 15:18:44.238662 master-0 kubenswrapper[28120]: I0220 15:18:44.238539 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g9knn"
Feb 20 15:18:44.244150 master-0 kubenswrapper[28120]: I0220 15:18:44.244076 28120 generic.go:334] "Generic (PLEG): container finished" podID="a62f40e1-6a44-4a9b-a939-53cd2f75472d" containerID="82d16db9fee5d39c4b72fee66a71e601aac9ce37c79f30d9b0a14396b5e580c8" exitCode=0
Feb 20 15:18:44.244361 master-0 kubenswrapper[28120]: I0220 15:18:44.244316 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7035-account-create-update-mldfs" event={"ID":"a62f40e1-6a44-4a9b-a939-53cd2f75472d","Type":"ContainerDied","Data":"82d16db9fee5d39c4b72fee66a71e601aac9ce37c79f30d9b0a14396b5e580c8"}
Feb 20 15:18:44.247375 master-0 kubenswrapper[28120]: I0220 15:18:44.247319 28120 generic.go:334] "Generic (PLEG): container finished" podID="bedaebf9-d467-474c-9b78-446fd89cb799" containerID="177dfb4dbb933226def2a02fc66e07aaa7aada2791f996c46c55084a48bbdc2d" exitCode=0
Feb 20 15:18:44.247603 master-0 kubenswrapper[28120]: I0220 15:18:44.247570 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b3af-account-create-update-2ltqm" event={"ID":"bedaebf9-d467-474c-9b78-446fd89cb799","Type":"ContainerDied","Data":"177dfb4dbb933226def2a02fc66e07aaa7aada2791f996c46c55084a48bbdc2d"}
Feb 20 15:18:44.923829 master-0 kubenswrapper[28120]: I0220 15:18:44.923777 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5qkvd"
Feb 20 15:18:45.017274 master-0 kubenswrapper[28120]: I0220 15:18:45.017220 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73782ca9-6bfa-4c3b-abeb-05a890839bc8-operator-scripts\") pod \"73782ca9-6bfa-4c3b-abeb-05a890839bc8\" (UID: \"73782ca9-6bfa-4c3b-abeb-05a890839bc8\") "
Feb 20 15:18:45.017508 master-0 kubenswrapper[28120]: I0220 15:18:45.017478 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6f8l\" (UniqueName: \"kubernetes.io/projected/73782ca9-6bfa-4c3b-abeb-05a890839bc8-kube-api-access-t6f8l\") pod \"73782ca9-6bfa-4c3b-abeb-05a890839bc8\" (UID: \"73782ca9-6bfa-4c3b-abeb-05a890839bc8\") "
Feb 20 15:18:45.017846 master-0 kubenswrapper[28120]: I0220 15:18:45.017733 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73782ca9-6bfa-4c3b-abeb-05a890839bc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73782ca9-6bfa-4c3b-abeb-05a890839bc8" (UID: "73782ca9-6bfa-4c3b-abeb-05a890839bc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:18:45.018394 master-0 kubenswrapper[28120]: I0220 15:18:45.018354 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73782ca9-6bfa-4c3b-abeb-05a890839bc8-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:45.021867 master-0 kubenswrapper[28120]: I0220 15:18:45.021829 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73782ca9-6bfa-4c3b-abeb-05a890839bc8-kube-api-access-t6f8l" (OuterVolumeSpecName: "kube-api-access-t6f8l") pod "73782ca9-6bfa-4c3b-abeb-05a890839bc8" (UID: "73782ca9-6bfa-4c3b-abeb-05a890839bc8"). InnerVolumeSpecName "kube-api-access-t6f8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:18:45.113808 master-0 kubenswrapper[28120]: I0220 15:18:45.113760 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9efe-account-create-update-ln54q"
Feb 20 15:18:45.122218 master-0 kubenswrapper[28120]: I0220 15:18:45.122180 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6f8l\" (UniqueName: \"kubernetes.io/projected/73782ca9-6bfa-4c3b-abeb-05a890839bc8-kube-api-access-t6f8l\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:45.126078 master-0 kubenswrapper[28120]: I0220 15:18:45.126042 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dcvql"
Feb 20 15:18:45.223649 master-0 kubenswrapper[28120]: I0220 15:18:45.223554 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24qvv\" (UniqueName: \"kubernetes.io/projected/d609009e-7cce-4b2a-8506-279710de8190-kube-api-access-24qvv\") pod \"d609009e-7cce-4b2a-8506-279710de8190\" (UID: \"d609009e-7cce-4b2a-8506-279710de8190\") "
Feb 20 15:18:45.224037 master-0 kubenswrapper[28120]: I0220 15:18:45.224015 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d609009e-7cce-4b2a-8506-279710de8190-operator-scripts\") pod \"d609009e-7cce-4b2a-8506-279710de8190\" (UID: \"d609009e-7cce-4b2a-8506-279710de8190\") "
Feb 20 15:18:45.224284 master-0 kubenswrapper[28120]: I0220 15:18:45.224264 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/146df682-64c3-4786-9489-2ac4b0ca2811-operator-scripts\") pod \"146df682-64c3-4786-9489-2ac4b0ca2811\" (UID: \"146df682-64c3-4786-9489-2ac4b0ca2811\") "
Feb 20 15:18:45.224445 master-0 kubenswrapper[28120]: I0220 15:18:45.224423 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r74ft\" (UniqueName: \"kubernetes.io/projected/146df682-64c3-4786-9489-2ac4b0ca2811-kube-api-access-r74ft\") pod \"146df682-64c3-4786-9489-2ac4b0ca2811\" (UID: \"146df682-64c3-4786-9489-2ac4b0ca2811\") "
Feb 20 15:18:45.224821 master-0 kubenswrapper[28120]: I0220 15:18:45.224425 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d609009e-7cce-4b2a-8506-279710de8190-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d609009e-7cce-4b2a-8506-279710de8190" (UID: "d609009e-7cce-4b2a-8506-279710de8190"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:18:45.224890 master-0 kubenswrapper[28120]: I0220 15:18:45.224718 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/146df682-64c3-4786-9489-2ac4b0ca2811-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "146df682-64c3-4786-9489-2ac4b0ca2811" (UID: "146df682-64c3-4786-9489-2ac4b0ca2811"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:18:45.225409 master-0 kubenswrapper[28120]: I0220 15:18:45.225385 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d609009e-7cce-4b2a-8506-279710de8190-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:45.225506 master-0 kubenswrapper[28120]: I0220 15:18:45.225492 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/146df682-64c3-4786-9489-2ac4b0ca2811-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:45.227376 master-0 kubenswrapper[28120]: I0220 15:18:45.227315 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d609009e-7cce-4b2a-8506-279710de8190-kube-api-access-24qvv" (OuterVolumeSpecName: "kube-api-access-24qvv") pod "d609009e-7cce-4b2a-8506-279710de8190" (UID: "d609009e-7cce-4b2a-8506-279710de8190"). InnerVolumeSpecName "kube-api-access-24qvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:18:45.229130 master-0 kubenswrapper[28120]: I0220 15:18:45.229081 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146df682-64c3-4786-9489-2ac4b0ca2811-kube-api-access-r74ft" (OuterVolumeSpecName: "kube-api-access-r74ft") pod "146df682-64c3-4786-9489-2ac4b0ca2811" (UID: "146df682-64c3-4786-9489-2ac4b0ca2811"). InnerVolumeSpecName "kube-api-access-r74ft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:18:45.263441 master-0 kubenswrapper[28120]: I0220 15:18:45.263391 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c0df7-default-external-api-0" event={"ID":"2b15a78c-0233-420f-989c-f52832b53e28","Type":"ContainerStarted","Data":"69d18e2378188f4b929f06062bd506b28100db6d6d58e6d9e417f9fd4b7a7782"}
Feb 20 15:18:45.269099 master-0 kubenswrapper[28120]: I0220 15:18:45.268907 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"ffe50c46-f30b-45e5-88ad-740e508e429b","Type":"ContainerStarted","Data":"da222804a415543b864a91e86b4e16c9ad08cce9dad3b82a1e8fc38e62ac9f30"}
Feb 20 15:18:45.271414 master-0 kubenswrapper[28120]: I0220 15:18:45.271367 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9efe-account-create-update-ln54q"
Feb 20 15:18:45.271500 master-0 kubenswrapper[28120]: I0220 15:18:45.271399 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9efe-account-create-update-ln54q" event={"ID":"d609009e-7cce-4b2a-8506-279710de8190","Type":"ContainerDied","Data":"30e47f0b94e03dcf8de53ffe0bc833d845677ad45c07fb677e6768f4ae1cd698"}
Feb 20 15:18:45.271500 master-0 kubenswrapper[28120]: I0220 15:18:45.271445 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30e47f0b94e03dcf8de53ffe0bc833d845677ad45c07fb677e6768f4ae1cd698"
Feb 20 15:18:45.273376 master-0 kubenswrapper[28120]: I0220 15:18:45.273354 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5qkvd"
Feb 20 15:18:45.273458 master-0 kubenswrapper[28120]: I0220 15:18:45.273373 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5qkvd" event={"ID":"73782ca9-6bfa-4c3b-abeb-05a890839bc8","Type":"ContainerDied","Data":"d869f9f83942d839e9c8c907b7016bd8cde13864d9ef9d0826211d3c6bfc25c3"}
Feb 20 15:18:45.273585 master-0 kubenswrapper[28120]: I0220 15:18:45.273538 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d869f9f83942d839e9c8c907b7016bd8cde13864d9ef9d0826211d3c6bfc25c3"
Feb 20 15:18:45.275995 master-0 kubenswrapper[28120]: I0220 15:18:45.275965 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dcvql" event={"ID":"146df682-64c3-4786-9489-2ac4b0ca2811","Type":"ContainerDied","Data":"9188671291c2e2d0bec35f9cf08ce9d566853077b4c5a16b7f395fa55ac73b84"}
Feb 20 15:18:45.276120 master-0 kubenswrapper[28120]: I0220 15:18:45.276100 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9188671291c2e2d0bec35f9cf08ce9d566853077b4c5a16b7f395fa55ac73b84"
Feb 20 15:18:45.276218 master-0 kubenswrapper[28120]: I0220 15:18:45.275971 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dcvql"
Feb 20 15:18:45.330017 master-0 kubenswrapper[28120]: I0220 15:18:45.327911 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r74ft\" (UniqueName: \"kubernetes.io/projected/146df682-64c3-4786-9489-2ac4b0ca2811-kube-api-access-r74ft\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:45.330017 master-0 kubenswrapper[28120]: I0220 15:18:45.327981 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24qvv\" (UniqueName: \"kubernetes.io/projected/d609009e-7cce-4b2a-8506-279710de8190-kube-api-access-24qvv\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:45.409244 master-0 kubenswrapper[28120]: I0220 15:18:45.402109 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:45.409244 master-0 kubenswrapper[28120]: I0220 15:18:45.402172 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:45.436821 master-0 kubenswrapper[28120]: I0220 15:18:45.436717 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:45.470042 master-0 kubenswrapper[28120]: I0220 15:18:45.462992 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:45.744899 master-0 kubenswrapper[28120]: I0220 15:18:45.744846 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b3af-account-create-update-2ltqm"
Feb 20 15:18:45.909726 master-0 kubenswrapper[28120]: I0220 15:18:45.909683 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7035-account-create-update-mldfs"
Feb 20 15:18:46.043263 master-0 kubenswrapper[28120]: I0220 15:18:46.043202 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4tjm\" (UniqueName: \"kubernetes.io/projected/bedaebf9-d467-474c-9b78-446fd89cb799-kube-api-access-s4tjm\") pod \"bedaebf9-d467-474c-9b78-446fd89cb799\" (UID: \"bedaebf9-d467-474c-9b78-446fd89cb799\") "
Feb 20 15:18:46.043533 master-0 kubenswrapper[28120]: I0220 15:18:46.043430 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f40e1-6a44-4a9b-a939-53cd2f75472d-operator-scripts\") pod \"a62f40e1-6a44-4a9b-a939-53cd2f75472d\" (UID: \"a62f40e1-6a44-4a9b-a939-53cd2f75472d\") "
Feb 20 15:18:46.043627 master-0 kubenswrapper[28120]: I0220 15:18:46.043605 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bedaebf9-d467-474c-9b78-446fd89cb799-operator-scripts\") pod \"bedaebf9-d467-474c-9b78-446fd89cb799\" (UID: \"bedaebf9-d467-474c-9b78-446fd89cb799\") "
Feb 20 15:18:46.043682 master-0 kubenswrapper[28120]: I0220 15:18:46.043671 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ktqg\" (UniqueName: \"kubernetes.io/projected/a62f40e1-6a44-4a9b-a939-53cd2f75472d-kube-api-access-8ktqg\") pod \"a62f40e1-6a44-4a9b-a939-53cd2f75472d\" (UID: \"a62f40e1-6a44-4a9b-a939-53cd2f75472d\") "
Feb 20 15:18:46.044116 master-0 kubenswrapper[28120]: I0220 15:18:46.044060 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62f40e1-6a44-4a9b-a939-53cd2f75472d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a62f40e1-6a44-4a9b-a939-53cd2f75472d" (UID: "a62f40e1-6a44-4a9b-a939-53cd2f75472d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:18:46.046175 master-0 kubenswrapper[28120]: I0220 15:18:46.044534 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedaebf9-d467-474c-9b78-446fd89cb799-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bedaebf9-d467-474c-9b78-446fd89cb799" (UID: "bedaebf9-d467-474c-9b78-446fd89cb799"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:18:46.046175 master-0 kubenswrapper[28120]: I0220 15:18:46.045018 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a62f40e1-6a44-4a9b-a939-53cd2f75472d-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:46.046175 master-0 kubenswrapper[28120]: I0220 15:18:46.045051 28120 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bedaebf9-d467-474c-9b78-446fd89cb799-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:46.047597 master-0 kubenswrapper[28120]: I0220 15:18:46.047526 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62f40e1-6a44-4a9b-a939-53cd2f75472d-kube-api-access-8ktqg" (OuterVolumeSpecName: "kube-api-access-8ktqg") pod "a62f40e1-6a44-4a9b-a939-53cd2f75472d" (UID: "a62f40e1-6a44-4a9b-a939-53cd2f75472d"). InnerVolumeSpecName "kube-api-access-8ktqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:18:46.048189 master-0 kubenswrapper[28120]: I0220 15:18:46.047954 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bedaebf9-d467-474c-9b78-446fd89cb799-kube-api-access-s4tjm" (OuterVolumeSpecName: "kube-api-access-s4tjm") pod "bedaebf9-d467-474c-9b78-446fd89cb799" (UID: "bedaebf9-d467-474c-9b78-446fd89cb799"). InnerVolumeSpecName "kube-api-access-s4tjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:18:46.109632 master-0 kubenswrapper[28120]: I0220 15:18:46.109541 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-c0df7-default-external-api-0" podStartSLOduration=5.109515654 podStartE2EDuration="5.109515654s" podCreationTimestamp="2026-02-20 15:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:46.109155475 +0000 UTC m=+1064.369949038" watchObservedRunningTime="2026-02-20 15:18:46.109515654 +0000 UTC m=+1064.370309257"
Feb 20 15:18:46.147710 master-0 kubenswrapper[28120]: I0220 15:18:46.147601 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ktqg\" (UniqueName: \"kubernetes.io/projected/a62f40e1-6a44-4a9b-a939-53cd2f75472d-kube-api-access-8ktqg\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:46.147710 master-0 kubenswrapper[28120]: I0220 15:18:46.147654 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4tjm\" (UniqueName: \"kubernetes.io/projected/bedaebf9-d467-474c-9b78-446fd89cb799-kube-api-access-s4tjm\") on node \"master-0\" DevicePath \"\""
Feb 20 15:18:46.295652 master-0 kubenswrapper[28120]: I0220 15:18:46.295592 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7035-account-create-update-mldfs"
Feb 20 15:18:46.296278 master-0 kubenswrapper[28120]: I0220 15:18:46.295620 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7035-account-create-update-mldfs" event={"ID":"a62f40e1-6a44-4a9b-a939-53cd2f75472d","Type":"ContainerDied","Data":"942bfffcc065be7485374808da5d4522deda8a518d4d31755a5b91c6b89bbb9f"}
Feb 20 15:18:46.296278 master-0 kubenswrapper[28120]: I0220 15:18:46.295705 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942bfffcc065be7485374808da5d4522deda8a518d4d31755a5b91c6b89bbb9f"
Feb 20 15:18:46.298034 master-0 kubenswrapper[28120]: I0220 15:18:46.297994 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b3af-account-create-update-2ltqm"
Feb 20 15:18:46.298114 master-0 kubenswrapper[28120]: I0220 15:18:46.298028 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b3af-account-create-update-2ltqm" event={"ID":"bedaebf9-d467-474c-9b78-446fd89cb799","Type":"ContainerDied","Data":"39a0815d587ac7cba57c78faaadb3f92b765b7b69e7844b28bdb8bfc21e09c04"}
Feb 20 15:18:46.298114 master-0 kubenswrapper[28120]: I0220 15:18:46.298068 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39a0815d587ac7cba57c78faaadb3f92b765b7b69e7844b28bdb8bfc21e09c04"
Feb 20 15:18:46.301409 master-0 kubenswrapper[28120]: I0220 15:18:46.301343 28120 generic.go:334] "Generic (PLEG): container finished" podID="ffe50c46-f30b-45e5-88ad-740e508e429b" containerID="da222804a415543b864a91e86b4e16c9ad08cce9dad3b82a1e8fc38e62ac9f30" exitCode=0
Feb 20 15:18:46.301485 master-0 kubenswrapper[28120]: I0220 15:18:46.301424 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"ffe50c46-f30b-45e5-88ad-740e508e429b","Type":"ContainerDied","Data":"da222804a415543b864a91e86b4e16c9ad08cce9dad3b82a1e8fc38e62ac9f30"}
Feb 20 15:18:46.302881 master-0 kubenswrapper[28120]: I0220 15:18:46.302846 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:46.302974 master-0 kubenswrapper[28120]: I0220 15:18:46.302885 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:47.321693 master-0 kubenswrapper[28120]: I0220 15:18:47.321644 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"ffe50c46-f30b-45e5-88ad-740e508e429b","Type":"ContainerStarted","Data":"bfad8627931f5b3db71d661202e01669797a2cc0fcf5e16cc7c685f5f9638af4"}
Feb 20 15:18:48.289440 master-0 kubenswrapper[28120]: I0220 15:18:48.289365 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:48.294823 master-0 kubenswrapper[28120]: I0220 15:18:48.294475 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-c0df7-default-internal-api-0"
Feb 20 15:18:48.361901 master-0 kubenswrapper[28120]: I0220 15:18:48.361841 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"ffe50c46-f30b-45e5-88ad-740e508e429b","Type":"ContainerStarted","Data":"38ac4f4133a5f8cc258473e0a8ea2514a336df8e4c24c74af3cf418215c9f851"}
Feb 20 15:18:48.361901 master-0 kubenswrapper[28120]: I0220 15:18:48.361897 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"ffe50c46-f30b-45e5-88ad-740e508e429b","Type":"ContainerStarted","Data":"9e895a746a31ab12a1fd5f4e040454659b5006e9bb2ce38c2472ec077f013702"}
Feb 20 15:18:49.377066 master-0 kubenswrapper[28120]: I0220 15:18:49.376842 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"ffe50c46-f30b-45e5-88ad-740e508e429b","Type":"ContainerStarted","Data":"0181c4428a5610ebd7d24d7b848dbd5d804df4ab40f08f8013276a8e9267f989"}
Feb 20 15:18:49.377066 master-0 kubenswrapper[28120]: I0220 15:18:49.376957 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"ffe50c46-f30b-45e5-88ad-740e508e429b","Type":"ContainerStarted","Data":"f134ade8013e9ed046297f0d7f33300ef4f2ff85874e5236a9693b9adacff349"}
Feb 20 15:18:49.423943 master-0 kubenswrapper[28120]: I0220 15:18:49.423819 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=7.423791407 podStartE2EDuration="7.423791407s" podCreationTimestamp="2026-02-20 15:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:18:49.419583512 +0000 UTC m=+1067.680377095" watchObservedRunningTime="2026-02-20 15:18:49.423791407 +0000 UTC m=+1067.684584990"
Feb 20 15:18:50.396882 master-0 kubenswrapper[28120]: I0220 15:18:50.396815 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 20 15:18:50.401530 master-0 kubenswrapper[28120]: I0220 15:18:50.401477 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.562332 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tqz9k"]
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: E0220 15:18:51.562847 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f40e1-6a44-4a9b-a939-53cd2f75472d" containerName="mariadb-account-create-update"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.562859 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f40e1-6a44-4a9b-a939-53cd2f75472d" containerName="mariadb-account-create-update"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: E0220 15:18:51.562879 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d609009e-7cce-4b2a-8506-279710de8190" containerName="mariadb-account-create-update"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.562898 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d609009e-7cce-4b2a-8506-279710de8190" containerName="mariadb-account-create-update"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: E0220 15:18:51.562931 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73782ca9-6bfa-4c3b-abeb-05a890839bc8" containerName="mariadb-database-create"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.562938 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="73782ca9-6bfa-4c3b-abeb-05a890839bc8" containerName="mariadb-database-create"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: E0220 15:18:51.562950 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedaebf9-d467-474c-9b78-446fd89cb799" containerName="mariadb-account-create-update"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.562956 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedaebf9-d467-474c-9b78-446fd89cb799" containerName="mariadb-account-create-update"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: E0220 15:18:51.562976 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c2f822-4035-41a5-862d-2510ea856786" containerName="mariadb-database-create"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.562982 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c2f822-4035-41a5-862d-2510ea856786" containerName="mariadb-database-create"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: E0220 15:18:51.563003 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146df682-64c3-4786-9489-2ac4b0ca2811" containerName="mariadb-database-create"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.563009 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="146df682-64c3-4786-9489-2ac4b0ca2811" containerName="mariadb-database-create"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.563206 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="bedaebf9-d467-474c-9b78-446fd89cb799" containerName="mariadb-account-create-update"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.563236 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f40e1-6a44-4a9b-a939-53cd2f75472d" containerName="mariadb-account-create-update"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.563260 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c2f822-4035-41a5-862d-2510ea856786" containerName="mariadb-database-create"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.563275 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="73782ca9-6bfa-4c3b-abeb-05a890839bc8" containerName="mariadb-database-create"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.563285 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="146df682-64c3-4786-9489-2ac4b0ca2811" containerName="mariadb-database-create"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.563309 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="d609009e-7cce-4b2a-8506-279710de8190" containerName="mariadb-account-create-update"
Feb 20 15:18:51.564721 master-0 kubenswrapper[28120]: I0220 15:18:51.563979 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.568582 master-0 kubenswrapper[28120]: I0220 15:18:51.568538 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 20 15:18:51.569296 master-0 kubenswrapper[28120]: I0220 15:18:51.569235 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 20 15:18:51.574985 master-0 kubenswrapper[28120]: I0220 15:18:51.574628 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tqz9k"]
Feb 20 15:18:51.727195 master-0 kubenswrapper[28120]: I0220 15:18:51.727067 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzn89\" (UniqueName: \"kubernetes.io/projected/eb0c9f2b-f709-4d41-b910-d08623295e4d-kube-api-access-mzn89\") pod \"nova-cell0-conductor-db-sync-tqz9k\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.727195 master-0 kubenswrapper[28120]: I0220 15:18:51.727182 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-config-data\") pod \"nova-cell0-conductor-db-sync-tqz9k\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.727435 master-0 kubenswrapper[28120]: I0220 15:18:51.727223 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-scripts\") pod \"nova-cell0-conductor-db-sync-tqz9k\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.727435 master-0 kubenswrapper[28120]: I0220 15:18:51.727393 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tqz9k\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.829824 master-0 kubenswrapper[28120]: I0220 15:18:51.829685 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-config-data\") pod \"nova-cell0-conductor-db-sync-tqz9k\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.829824 master-0 kubenswrapper[28120]: I0220 15:18:51.829807 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-scripts\") pod \"nova-cell0-conductor-db-sync-tqz9k\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.830074 master-0 kubenswrapper[28120]: I0220 15:18:51.829948 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tqz9k\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.830113 master-0 kubenswrapper[28120]: I0220 15:18:51.830086 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzn89\" (UniqueName: \"kubernetes.io/projected/eb0c9f2b-f709-4d41-b910-d08623295e4d-kube-api-access-mzn89\") pod \"nova-cell0-conductor-db-sync-tqz9k\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.834113 master-0 kubenswrapper[28120]: I0220 15:18:51.834077 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-config-data\") pod \"nova-cell0-conductor-db-sync-tqz9k\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.834562 master-0 kubenswrapper[28120]: I0220 15:18:51.834530 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-scripts\") pod \"nova-cell0-conductor-db-sync-tqz9k\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.835565 master-0 kubenswrapper[28120]: I0220 15:18:51.835527 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tqz9k\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.844260 master-0 kubenswrapper[28120]: I0220 15:18:51.844221 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzn89\" (UniqueName: \"kubernetes.io/projected/eb0c9f2b-f709-4d41-b910-d08623295e4d-kube-api-access-mzn89\") pod \"nova-cell0-conductor-db-sync-tqz9k\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:51.884705 master-0 kubenswrapper[28120]: I0220 15:18:51.884628 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tqz9k"
Feb 20 15:18:52.425679 master-0 kubenswrapper[28120]: I0220 15:18:52.425620 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tqz9k"]
Feb 20 15:18:52.776333 master-0 kubenswrapper[28120]: I0220 15:18:52.776260 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:52.776333 master-0 kubenswrapper[28120]: I0220 15:18:52.776335 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:52.837841 master-0 kubenswrapper[28120]: I0220 15:18:52.837782 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:52.846483 master-0 kubenswrapper[28120]: I0220 15:18:52.846440 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-c0df7-default-external-api-0"
Feb 20 15:18:52.893621 master-0 kubenswrapper[28120]: I0220 15:18:52.893542 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 20 15:18:52.893621 master-0 kubenswrapper[28120]: I0220 15:18:52.893621 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 20 15:18:52.894341 master-0 kubenswrapper[28120]: I0220 15:18:52.893665 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Feb 20 15:18:52.894341 master-0 kubenswrapper[28120]: I0220 15:18:52.893680 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Feb 20 15:18:52.898807 master-0 kubenswrapper[28120]: I0220 15:18:52.898758 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Feb 20 15:18:52.945464 master-0 kubenswrapper[28120]: I0220 15:18:52.945395 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 20 15:18:52.947285 master-0 kubenswrapper[28120]: I0220 15:18:52.947252 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 20 15:18:52.955905 master-0 kubenswrapper[28120]: I0220 15:18:52.955847 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 20 15:18:53.448678 master-0 kubenswrapper[28120]: I0220 15:18:53.448593 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tqz9k" event={"ID":"eb0c9f2b-f709-4d41-b910-d08623295e4d","Type":"ContainerStarted","Data":"e30ce4dab61841b6fab69cef64ae6b33c691d2acd08c7cc735ad56878a12a280"} Feb 20 15:18:53.450392 master-0 kubenswrapper[28120]: I0220 15:18:53.450325 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:18:53.450392 master-0 kubenswrapper[28120]: I0220 15:18:53.450360 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:18:53.462107 master-0 kubenswrapper[28120]: I0220 15:18:53.459712 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 20 15:18:53.462107 master-0 kubenswrapper[28120]: I0220 15:18:53.461403 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 20 15:18:55.423113 master-0 kubenswrapper[28120]: I0220 15:18:55.423054 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:18:55.442094 master-0 kubenswrapper[28120]: I0220 15:18:55.441982 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/glance-c0df7-default-external-api-0" Feb 20 15:19:01.566424 master-0 kubenswrapper[28120]: I0220 15:19:01.566322 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tqz9k" event={"ID":"eb0c9f2b-f709-4d41-b910-d08623295e4d","Type":"ContainerStarted","Data":"16287f77eec78e2bf151d8ce86595c86958de37d220a6bca5aa3149425764ab0"} Feb 20 15:19:01.639011 master-0 kubenswrapper[28120]: I0220 15:19:01.638810 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-tqz9k" podStartSLOduration=2.579167064 podStartE2EDuration="10.638770627s" podCreationTimestamp="2026-02-20 15:18:51 +0000 UTC" firstStartedPulling="2026-02-20 15:18:52.429232403 +0000 UTC m=+1070.690025966" lastFinishedPulling="2026-02-20 15:19:00.488835956 +0000 UTC m=+1078.749629529" observedRunningTime="2026-02-20 15:19:01.619508217 +0000 UTC m=+1079.880301820" watchObservedRunningTime="2026-02-20 15:19:01.638770627 +0000 UTC m=+1079.899564230" Feb 20 15:19:17.846561 master-0 kubenswrapper[28120]: I0220 15:19:17.846451 28120 generic.go:334] "Generic (PLEG): container finished" podID="eb0c9f2b-f709-4d41-b910-d08623295e4d" containerID="16287f77eec78e2bf151d8ce86595c86958de37d220a6bca5aa3149425764ab0" exitCode=0 Feb 20 15:19:17.847534 master-0 kubenswrapper[28120]: I0220 15:19:17.846530 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tqz9k" event={"ID":"eb0c9f2b-f709-4d41-b910-d08623295e4d","Type":"ContainerDied","Data":"16287f77eec78e2bf151d8ce86595c86958de37d220a6bca5aa3149425764ab0"} Feb 20 15:19:19.408166 master-0 kubenswrapper[28120]: I0220 15:19:19.408109 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tqz9k" Feb 20 15:19:19.422116 master-0 kubenswrapper[28120]: I0220 15:19:19.422056 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-combined-ca-bundle\") pod \"eb0c9f2b-f709-4d41-b910-d08623295e4d\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " Feb 20 15:19:19.422684 master-0 kubenswrapper[28120]: I0220 15:19:19.422214 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-scripts\") pod \"eb0c9f2b-f709-4d41-b910-d08623295e4d\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " Feb 20 15:19:19.423181 master-0 kubenswrapper[28120]: I0220 15:19:19.422827 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzn89\" (UniqueName: \"kubernetes.io/projected/eb0c9f2b-f709-4d41-b910-d08623295e4d-kube-api-access-mzn89\") pod \"eb0c9f2b-f709-4d41-b910-d08623295e4d\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " Feb 20 15:19:19.423181 master-0 kubenswrapper[28120]: I0220 15:19:19.422905 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-config-data\") pod \"eb0c9f2b-f709-4d41-b910-d08623295e4d\" (UID: \"eb0c9f2b-f709-4d41-b910-d08623295e4d\") " Feb 20 15:19:19.426248 master-0 kubenswrapper[28120]: I0220 15:19:19.426176 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0c9f2b-f709-4d41-b910-d08623295e4d-kube-api-access-mzn89" (OuterVolumeSpecName: "kube-api-access-mzn89") pod "eb0c9f2b-f709-4d41-b910-d08623295e4d" (UID: "eb0c9f2b-f709-4d41-b910-d08623295e4d"). InnerVolumeSpecName "kube-api-access-mzn89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:19:19.429028 master-0 kubenswrapper[28120]: I0220 15:19:19.427200 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-scripts" (OuterVolumeSpecName: "scripts") pod "eb0c9f2b-f709-4d41-b910-d08623295e4d" (UID: "eb0c9f2b-f709-4d41-b910-d08623295e4d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:19.483877 master-0 kubenswrapper[28120]: I0220 15:19:19.483784 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb0c9f2b-f709-4d41-b910-d08623295e4d" (UID: "eb0c9f2b-f709-4d41-b910-d08623295e4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:19.503319 master-0 kubenswrapper[28120]: I0220 15:19:19.503247 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-config-data" (OuterVolumeSpecName: "config-data") pod "eb0c9f2b-f709-4d41-b910-d08623295e4d" (UID: "eb0c9f2b-f709-4d41-b910-d08623295e4d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:19.526075 master-0 kubenswrapper[28120]: I0220 15:19:19.526002 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:19.526075 master-0 kubenswrapper[28120]: I0220 15:19:19.526069 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzn89\" (UniqueName: \"kubernetes.io/projected/eb0c9f2b-f709-4d41-b910-d08623295e4d-kube-api-access-mzn89\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:19.526075 master-0 kubenswrapper[28120]: I0220 15:19:19.526084 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:19.526356 master-0 kubenswrapper[28120]: I0220 15:19:19.526093 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0c9f2b-f709-4d41-b910-d08623295e4d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:19.880967 master-0 kubenswrapper[28120]: I0220 15:19:19.880849 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tqz9k" event={"ID":"eb0c9f2b-f709-4d41-b910-d08623295e4d","Type":"ContainerDied","Data":"e30ce4dab61841b6fab69cef64ae6b33c691d2acd08c7cc735ad56878a12a280"} Feb 20 15:19:19.881269 master-0 kubenswrapper[28120]: I0220 15:19:19.880987 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e30ce4dab61841b6fab69cef64ae6b33c691d2acd08c7cc735ad56878a12a280" Feb 20 15:19:19.881269 master-0 kubenswrapper[28120]: I0220 15:19:19.880884 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tqz9k" Feb 20 15:19:20.126144 master-0 kubenswrapper[28120]: I0220 15:19:20.126041 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 20 15:19:20.134405 master-0 kubenswrapper[28120]: E0220 15:19:20.126953 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0c9f2b-f709-4d41-b910-d08623295e4d" containerName="nova-cell0-conductor-db-sync" Feb 20 15:19:20.134405 master-0 kubenswrapper[28120]: I0220 15:19:20.134177 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0c9f2b-f709-4d41-b910-d08623295e4d" containerName="nova-cell0-conductor-db-sync" Feb 20 15:19:20.135146 master-0 kubenswrapper[28120]: I0220 15:19:20.135051 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0c9f2b-f709-4d41-b910-d08623295e4d" containerName="nova-cell0-conductor-db-sync" Feb 20 15:19:20.136386 master-0 kubenswrapper[28120]: I0220 15:19:20.136327 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:20.140502 master-0 kubenswrapper[28120]: I0220 15:19:20.140447 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 20 15:19:20.146124 master-0 kubenswrapper[28120]: I0220 15:19:20.146066 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 20 15:19:20.244947 master-0 kubenswrapper[28120]: I0220 15:19:20.244839 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pqc9\" (UniqueName: \"kubernetes.io/projected/2283a2d2-a451-40c2-8cee-c66af2e1f90d-kube-api-access-2pqc9\") pod \"nova-cell0-conductor-0\" (UID: \"2283a2d2-a451-40c2-8cee-c66af2e1f90d\") " pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:20.245138 master-0 kubenswrapper[28120]: I0220 15:19:20.245067 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2283a2d2-a451-40c2-8cee-c66af2e1f90d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2283a2d2-a451-40c2-8cee-c66af2e1f90d\") " pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:20.246355 master-0 kubenswrapper[28120]: I0220 15:19:20.245428 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2283a2d2-a451-40c2-8cee-c66af2e1f90d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2283a2d2-a451-40c2-8cee-c66af2e1f90d\") " pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:20.351432 master-0 kubenswrapper[28120]: I0220 15:19:20.349637 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pqc9\" (UniqueName: \"kubernetes.io/projected/2283a2d2-a451-40c2-8cee-c66af2e1f90d-kube-api-access-2pqc9\") pod \"nova-cell0-conductor-0\" (UID: 
\"2283a2d2-a451-40c2-8cee-c66af2e1f90d\") " pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:20.351432 master-0 kubenswrapper[28120]: I0220 15:19:20.349739 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2283a2d2-a451-40c2-8cee-c66af2e1f90d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2283a2d2-a451-40c2-8cee-c66af2e1f90d\") " pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:20.351432 master-0 kubenswrapper[28120]: I0220 15:19:20.350058 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2283a2d2-a451-40c2-8cee-c66af2e1f90d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2283a2d2-a451-40c2-8cee-c66af2e1f90d\") " pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:20.362132 master-0 kubenswrapper[28120]: I0220 15:19:20.355739 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2283a2d2-a451-40c2-8cee-c66af2e1f90d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2283a2d2-a451-40c2-8cee-c66af2e1f90d\") " pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:20.365355 master-0 kubenswrapper[28120]: I0220 15:19:20.365261 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2283a2d2-a451-40c2-8cee-c66af2e1f90d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2283a2d2-a451-40c2-8cee-c66af2e1f90d\") " pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:20.380571 master-0 kubenswrapper[28120]: I0220 15:19:20.380508 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pqc9\" (UniqueName: \"kubernetes.io/projected/2283a2d2-a451-40c2-8cee-c66af2e1f90d-kube-api-access-2pqc9\") pod \"nova-cell0-conductor-0\" (UID: \"2283a2d2-a451-40c2-8cee-c66af2e1f90d\") " 
pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:20.469409 master-0 kubenswrapper[28120]: I0220 15:19:20.469237 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:21.038647 master-0 kubenswrapper[28120]: I0220 15:19:21.038509 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 20 15:19:21.052759 master-0 kubenswrapper[28120]: W0220 15:19:21.050833 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2283a2d2_a451_40c2_8cee_c66af2e1f90d.slice/crio-cf6e37104c3c46d8f21baf42846df7c9c036c9b6c842341e1acf0e4ccc36ec43 WatchSource:0}: Error finding container cf6e37104c3c46d8f21baf42846df7c9c036c9b6c842341e1acf0e4ccc36ec43: Status 404 returned error can't find the container with id cf6e37104c3c46d8f21baf42846df7c9c036c9b6c842341e1acf0e4ccc36ec43 Feb 20 15:19:21.909183 master-0 kubenswrapper[28120]: I0220 15:19:21.909093 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2283a2d2-a451-40c2-8cee-c66af2e1f90d","Type":"ContainerStarted","Data":"1d6ba2b4ea99bd764ef65500a0eef0639fbed5ea2155cb3e038bb9faf49a2a9d"} Feb 20 15:19:21.909183 master-0 kubenswrapper[28120]: I0220 15:19:21.909152 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2283a2d2-a451-40c2-8cee-c66af2e1f90d","Type":"ContainerStarted","Data":"cf6e37104c3c46d8f21baf42846df7c9c036c9b6c842341e1acf0e4ccc36ec43"} Feb 20 15:19:21.910072 master-0 kubenswrapper[28120]: I0220 15:19:21.909300 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:21.962344 master-0 kubenswrapper[28120]: I0220 15:19:21.962260 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" 
podStartSLOduration=1.962242378 podStartE2EDuration="1.962242378s" podCreationTimestamp="2026-02-20 15:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:19:21.961002287 +0000 UTC m=+1100.221795860" watchObservedRunningTime="2026-02-20 15:19:21.962242378 +0000 UTC m=+1100.223035941" Feb 20 15:19:23.975275 master-0 kubenswrapper[28120]: I0220 15:19:23.975118 28120 generic.go:334] "Generic (PLEG): container finished" podID="af765e06-e2aa-4239-8a51-fc29e02fa257" containerID="a26c972f1b27276fe2c2e67689f1d943da6b8323848b9c5dd16ab3fde727a2ab" exitCode=0 Feb 20 15:19:23.975275 master-0 kubenswrapper[28120]: I0220 15:19:23.975189 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"af765e06-e2aa-4239-8a51-fc29e02fa257","Type":"ContainerDied","Data":"a26c972f1b27276fe2c2e67689f1d943da6b8323848b9c5dd16ab3fde727a2ab"} Feb 20 15:19:24.992735 master-0 kubenswrapper[28120]: I0220 15:19:24.992663 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"af765e06-e2aa-4239-8a51-fc29e02fa257","Type":"ContainerStarted","Data":"21dd2c002bb38a126ffc6d442978113e11ddb44bb0f48cb0482f09756d015d08"} Feb 20 15:19:26.011681 master-0 kubenswrapper[28120]: I0220 15:19:26.011599 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"af765e06-e2aa-4239-8a51-fc29e02fa257","Type":"ContainerStarted","Data":"ca028e8281a014b53123edb5f32c3d294d6da1c77ac9a4f48be1a4f321a0af18"} Feb 20 15:19:26.011681 master-0 kubenswrapper[28120]: I0220 15:19:26.011678 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"af765e06-e2aa-4239-8a51-fc29e02fa257","Type":"ContainerStarted","Data":"8e1845acce1bcc3db8eb034c8686c89dc579e23c180b4a638b6d9b8f68013141"} Feb 20 15:19:26.012321 master-0 kubenswrapper[28120]: I0220 
15:19:26.011900 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 20 15:19:26.012321 master-0 kubenswrapper[28120]: I0220 15:19:26.012020 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 20 15:19:26.075337 master-0 kubenswrapper[28120]: I0220 15:19:26.075250 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=56.45699802 podStartE2EDuration="1m32.07523015s" podCreationTimestamp="2026-02-20 15:17:54 +0000 UTC" firstStartedPulling="2026-02-20 15:18:04.008049779 +0000 UTC m=+1022.268843342" lastFinishedPulling="2026-02-20 15:18:39.626281919 +0000 UTC m=+1057.887075472" observedRunningTime="2026-02-20 15:19:26.065265311 +0000 UTC m=+1104.326058934" watchObservedRunningTime="2026-02-20 15:19:26.07523015 +0000 UTC m=+1104.336023713" Feb 20 15:19:27.037250 master-0 kubenswrapper[28120]: I0220 15:19:27.037162 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Feb 20 15:19:28.126280 master-0 kubenswrapper[28120]: I0220 15:19:28.126191 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 20 15:19:28.348392 master-0 kubenswrapper[28120]: I0220 15:19:28.348280 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Feb 20 15:19:30.097809 master-0 kubenswrapper[28120]: I0220 15:19:30.097710 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 20 15:19:30.525197 master-0 kubenswrapper[28120]: I0220 15:19:30.525125 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 20 15:19:31.127144 master-0 kubenswrapper[28120]: I0220 15:19:31.127088 28120 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-cell-mapping-mhjxd"] Feb 20 15:19:31.129024 master-0 kubenswrapper[28120]: I0220 15:19:31.128989 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.134143 master-0 kubenswrapper[28120]: I0220 15:19:31.133896 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 20 15:19:31.134735 master-0 kubenswrapper[28120]: I0220 15:19:31.134699 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 20 15:19:31.154670 master-0 kubenswrapper[28120]: I0220 15:19:31.154606 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mhjxd"] Feb 20 15:19:31.243807 master-0 kubenswrapper[28120]: I0220 15:19:31.242648 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 20 15:19:31.250759 master-0 kubenswrapper[28120]: I0220 15:19:31.250626 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:31.329697 master-0 kubenswrapper[28120]: I0220 15:19:31.329636 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Feb 20 15:19:31.339916 master-0 kubenswrapper[28120]: I0220 15:19:31.338302 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 20 15:19:31.370799 master-0 kubenswrapper[28120]: I0220 15:19:31.370742 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 20 15:19:31.378087 master-0 kubenswrapper[28120]: I0220 15:19:31.374134 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 20 15:19:31.378087 master-0 kubenswrapper[28120]: I0220 15:19:31.376802 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 20 15:19:31.428049 master-0 kubenswrapper[28120]: I0220 15:19:31.425418 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 20 15:19:31.431560 master-0 kubenswrapper[28120]: I0220 15:19:31.431503 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mhjxd\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.431630 master-0 kubenswrapper[28120]: I0220 15:19:31.431606 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnq9c\" (UniqueName: \"kubernetes.io/projected/9a28ab67-ed88-4926-8ebc-a1b0f52d2726-kube-api-access-qnq9c\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"9a28ab67-ed88-4926-8ebc-a1b0f52d2726\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:31.431692 master-0 kubenswrapper[28120]: I0220 15:19:31.431666 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-scripts\") pod \"nova-cell0-cell-mapping-mhjxd\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.431736 master-0 kubenswrapper[28120]: I0220 15:19:31.431712 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-config-data\") pod \"nova-cell0-cell-mapping-mhjxd\" 
(UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.431781 master-0 kubenswrapper[28120]: I0220 15:19:31.431763 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a28ab67-ed88-4926-8ebc-a1b0f52d2726-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"9a28ab67-ed88-4926-8ebc-a1b0f52d2726\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:31.431868 master-0 kubenswrapper[28120]: I0220 15:19:31.431840 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a28ab67-ed88-4926-8ebc-a1b0f52d2726-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"9a28ab67-ed88-4926-8ebc-a1b0f52d2726\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:31.432027 master-0 kubenswrapper[28120]: I0220 15:19:31.432000 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56cmj\" (UniqueName: \"kubernetes.io/projected/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-kube-api-access-56cmj\") pod \"nova-cell0-cell-mapping-mhjxd\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.539706 master-0 kubenswrapper[28120]: I0220 15:19:31.535862 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56cmj\" (UniqueName: \"kubernetes.io/projected/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-kube-api-access-56cmj\") pod \"nova-cell0-cell-mapping-mhjxd\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.539706 master-0 kubenswrapper[28120]: I0220 15:19:31.535950 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mhjxd\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.539706 master-0 kubenswrapper[28120]: I0220 15:19:31.535994 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnq9c\" (UniqueName: \"kubernetes.io/projected/9a28ab67-ed88-4926-8ebc-a1b0f52d2726-kube-api-access-qnq9c\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"9a28ab67-ed88-4926-8ebc-a1b0f52d2726\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:31.539706 master-0 kubenswrapper[28120]: I0220 15:19:31.536036 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-scripts\") pod \"nova-cell0-cell-mapping-mhjxd\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.539706 master-0 kubenswrapper[28120]: I0220 15:19:31.536062 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488fe4b5-a9c6-4373-b370-c9d253d13bf9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") " pod="openstack/nova-api-0" Feb 20 15:19:31.539706 master-0 kubenswrapper[28120]: I0220 15:19:31.536088 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-config-data\") pod \"nova-cell0-cell-mapping-mhjxd\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.539706 master-0 kubenswrapper[28120]: I0220 15:19:31.536155 28120 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a28ab67-ed88-4926-8ebc-a1b0f52d2726-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"9a28ab67-ed88-4926-8ebc-a1b0f52d2726\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:31.539706 master-0 kubenswrapper[28120]: I0220 15:19:31.536188 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488fe4b5-a9c6-4373-b370-c9d253d13bf9-config-data\") pod \"nova-api-0\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") " pod="openstack/nova-api-0" Feb 20 15:19:31.539706 master-0 kubenswrapper[28120]: I0220 15:19:31.536236 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a28ab67-ed88-4926-8ebc-a1b0f52d2726-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"9a28ab67-ed88-4926-8ebc-a1b0f52d2726\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:31.539706 master-0 kubenswrapper[28120]: I0220 15:19:31.536266 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qzml\" (UniqueName: \"kubernetes.io/projected/488fe4b5-a9c6-4373-b370-c9d253d13bf9-kube-api-access-8qzml\") pod \"nova-api-0\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") " pod="openstack/nova-api-0" Feb 20 15:19:31.539706 master-0 kubenswrapper[28120]: I0220 15:19:31.536307 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/488fe4b5-a9c6-4373-b370-c9d253d13bf9-logs\") pod \"nova-api-0\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") " pod="openstack/nova-api-0" Feb 20 15:19:31.544530 master-0 kubenswrapper[28120]: I0220 15:19:31.544497 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-config-data\") pod \"nova-cell0-cell-mapping-mhjxd\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.545087 master-0 kubenswrapper[28120]: I0220 15:19:31.545045 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a28ab67-ed88-4926-8ebc-a1b0f52d2726-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"9a28ab67-ed88-4926-8ebc-a1b0f52d2726\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:31.560068 master-0 kubenswrapper[28120]: I0220 15:19:31.560020 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-scripts\") pod \"nova-cell0-cell-mapping-mhjxd\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.578136 master-0 kubenswrapper[28120]: I0220 15:19:31.575297 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mhjxd\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.578891 master-0 kubenswrapper[28120]: I0220 15:19:31.578752 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:19:31.581041 master-0 kubenswrapper[28120]: I0220 15:19:31.580984 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 15:19:31.583880 master-0 kubenswrapper[28120]: I0220 15:19:31.583843 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 20 15:19:31.589398 master-0 kubenswrapper[28120]: I0220 15:19:31.585701 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnq9c\" (UniqueName: \"kubernetes.io/projected/9a28ab67-ed88-4926-8ebc-a1b0f52d2726-kube-api-access-qnq9c\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"9a28ab67-ed88-4926-8ebc-a1b0f52d2726\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:31.590176 master-0 kubenswrapper[28120]: I0220 15:19:31.590147 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56cmj\" (UniqueName: \"kubernetes.io/projected/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-kube-api-access-56cmj\") pod \"nova-cell0-cell-mapping-mhjxd\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.592419 master-0 kubenswrapper[28120]: I0220 15:19:31.592382 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a28ab67-ed88-4926-8ebc-a1b0f52d2726-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"9a28ab67-ed88-4926-8ebc-a1b0f52d2726\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:31.611998 master-0 kubenswrapper[28120]: I0220 15:19:31.610396 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 15:19:31.619226 master-0 kubenswrapper[28120]: I0220 15:19:31.619172 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:19:31.642875 master-0 kubenswrapper[28120]: I0220 15:19:31.632954 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:19:31.642875 master-0 kubenswrapper[28120]: I0220 15:19:31.633293 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 20 15:19:31.642875 master-0 kubenswrapper[28120]: I0220 15:19:31.639344 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488fe4b5-a9c6-4373-b370-c9d253d13bf9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") " pod="openstack/nova-api-0" Feb 20 15:19:31.642875 master-0 kubenswrapper[28120]: I0220 15:19:31.639436 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488fe4b5-a9c6-4373-b370-c9d253d13bf9-config-data\") pod \"nova-api-0\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") " pod="openstack/nova-api-0" Feb 20 15:19:31.642875 master-0 kubenswrapper[28120]: I0220 15:19:31.639495 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qzml\" (UniqueName: \"kubernetes.io/projected/488fe4b5-a9c6-4373-b370-c9d253d13bf9-kube-api-access-8qzml\") pod \"nova-api-0\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") " pod="openstack/nova-api-0" Feb 20 15:19:31.642875 master-0 kubenswrapper[28120]: I0220 15:19:31.639530 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/488fe4b5-a9c6-4373-b370-c9d253d13bf9-logs\") pod \"nova-api-0\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") " pod="openstack/nova-api-0" Feb 20 15:19:31.642875 master-0 kubenswrapper[28120]: I0220 15:19:31.641581 28120 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/488fe4b5-a9c6-4373-b370-c9d253d13bf9-logs\") pod \"nova-api-0\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") " pod="openstack/nova-api-0" Feb 20 15:19:31.643374 master-0 kubenswrapper[28120]: I0220 15:19:31.643342 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488fe4b5-a9c6-4373-b370-c9d253d13bf9-config-data\") pod \"nova-api-0\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") " pod="openstack/nova-api-0" Feb 20 15:19:31.645939 master-0 kubenswrapper[28120]: I0220 15:19:31.645899 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488fe4b5-a9c6-4373-b370-c9d253d13bf9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") " pod="openstack/nova-api-0" Feb 20 15:19:31.666832 master-0 kubenswrapper[28120]: I0220 15:19:31.666782 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:31.695735 master-0 kubenswrapper[28120]: I0220 15:19:31.690679 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 15:19:31.707944 master-0 kubenswrapper[28120]: I0220 15:19:31.707131 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qzml\" (UniqueName: \"kubernetes.io/projected/488fe4b5-a9c6-4373-b370-c9d253d13bf9-kube-api-access-8qzml\") pod \"nova-api-0\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") " pod="openstack/nova-api-0" Feb 20 15:19:31.744867 master-0 kubenswrapper[28120]: I0220 15:19:31.744751 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:19:31.744867 master-0 kubenswrapper[28120]: I0220 15:19:31.744828 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:19:31.744867 master-0 kubenswrapper[28120]: I0220 15:19:31.744872 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/536236e3-76b3-4dea-831c-cd2f327fda59-logs\") pod \"nova-metadata-0\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " pod="openstack/nova-metadata-0" Feb 20 15:19:31.745157 master-0 kubenswrapper[28120]: I0220 15:19:31.744974 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fm5k4\" (UniqueName: \"kubernetes.io/projected/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-kube-api-access-fm5k4\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:19:31.745157 master-0 kubenswrapper[28120]: I0220 15:19:31.745084 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536236e3-76b3-4dea-831c-cd2f327fda59-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " pod="openstack/nova-metadata-0" Feb 20 15:19:31.745240 master-0 kubenswrapper[28120]: I0220 15:19:31.745163 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536236e3-76b3-4dea-831c-cd2f327fda59-config-data\") pod \"nova-metadata-0\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " pod="openstack/nova-metadata-0" Feb 20 15:19:31.745240 master-0 kubenswrapper[28120]: I0220 15:19:31.745193 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nhf5\" (UniqueName: \"kubernetes.io/projected/536236e3-76b3-4dea-831c-cd2f327fda59-kube-api-access-2nhf5\") pod \"nova-metadata-0\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " pod="openstack/nova-metadata-0" Feb 20 15:19:31.767566 master-0 kubenswrapper[28120]: I0220 15:19:31.767003 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:31.801372 master-0 kubenswrapper[28120]: I0220 15:19:31.801323 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:19:31.802988 master-0 kubenswrapper[28120]: I0220 15:19:31.802962 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 15:19:31.804569 master-0 kubenswrapper[28120]: I0220 15:19:31.804520 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 20 15:19:31.837056 master-0 kubenswrapper[28120]: I0220 15:19:31.835009 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78d5d45447-5tn9v"] Feb 20 15:19:31.839347 master-0 kubenswrapper[28120]: I0220 15:19:31.839228 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:31.855750 master-0 kubenswrapper[28120]: I0220 15:19:31.851944 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536236e3-76b3-4dea-831c-cd2f327fda59-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " pod="openstack/nova-metadata-0" Feb 20 15:19:31.855750 master-0 kubenswrapper[28120]: I0220 15:19:31.853657 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536236e3-76b3-4dea-831c-cd2f327fda59-config-data\") pod \"nova-metadata-0\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " pod="openstack/nova-metadata-0" Feb 20 15:19:31.855750 master-0 kubenswrapper[28120]: I0220 15:19:31.853715 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nhf5\" (UniqueName: \"kubernetes.io/projected/536236e3-76b3-4dea-831c-cd2f327fda59-kube-api-access-2nhf5\") pod \"nova-metadata-0\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " pod="openstack/nova-metadata-0" Feb 20 15:19:31.855750 master-0 kubenswrapper[28120]: I0220 15:19:31.853884 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:19:31.855750 master-0 kubenswrapper[28120]: I0220 15:19:31.853970 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:19:31.855750 master-0 kubenswrapper[28120]: I0220 15:19:31.854041 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/536236e3-76b3-4dea-831c-cd2f327fda59-logs\") pod \"nova-metadata-0\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " pod="openstack/nova-metadata-0" Feb 20 15:19:31.855750 master-0 kubenswrapper[28120]: I0220 15:19:31.854185 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm5k4\" (UniqueName: \"kubernetes.io/projected/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-kube-api-access-fm5k4\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:19:31.869944 master-0 kubenswrapper[28120]: I0220 15:19:31.861286 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/536236e3-76b3-4dea-831c-cd2f327fda59-logs\") pod \"nova-metadata-0\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " pod="openstack/nova-metadata-0" Feb 20 15:19:31.869944 master-0 kubenswrapper[28120]: I0220 15:19:31.862468 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:19:31.869944 master-0 kubenswrapper[28120]: I0220 15:19:31.863227 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:19:31.869944 master-0 kubenswrapper[28120]: I0220 15:19:31.863885 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536236e3-76b3-4dea-831c-cd2f327fda59-config-data\") pod \"nova-metadata-0\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " pod="openstack/nova-metadata-0" Feb 20 15:19:31.884005 master-0 kubenswrapper[28120]: I0220 15:19:31.870838 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:19:31.884005 master-0 kubenswrapper[28120]: I0220 15:19:31.871031 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536236e3-76b3-4dea-831c-cd2f327fda59-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " pod="openstack/nova-metadata-0" Feb 20 15:19:31.884005 master-0 kubenswrapper[28120]: I0220 15:19:31.879916 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nhf5\" (UniqueName: \"kubernetes.io/projected/536236e3-76b3-4dea-831c-cd2f327fda59-kube-api-access-2nhf5\") pod \"nova-metadata-0\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " pod="openstack/nova-metadata-0" Feb 20 15:19:31.884005 master-0 kubenswrapper[28120]: I0220 15:19:31.880651 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm5k4\" (UniqueName: 
\"kubernetes.io/projected/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-kube-api-access-fm5k4\") pod \"nova-cell1-novncproxy-0\" (UID: \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:19:31.920390 master-0 kubenswrapper[28120]: I0220 15:19:31.919866 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78d5d45447-5tn9v"] Feb 20 15:19:31.957652 master-0 kubenswrapper[28120]: I0220 15:19:31.956787 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-ovsdbserver-nb\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:31.957866 master-0 kubenswrapper[28120]: I0220 15:19:31.957676 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-dns-svc\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:31.957960 master-0 kubenswrapper[28120]: I0220 15:19:31.957898 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-config\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:31.958017 master-0 kubenswrapper[28120]: I0220 15:19:31.957994 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbfwp\" (UniqueName: \"kubernetes.io/projected/4bf741d6-32da-404c-a508-ddf648ba8b62-kube-api-access-vbfwp\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: 
\"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:31.958589 master-0 kubenswrapper[28120]: I0220 15:19:31.958568 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b07a801-92f0-4edc-ae58-a816afea6976-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6b07a801-92f0-4edc-ae58-a816afea6976\") " pod="openstack/nova-scheduler-0" Feb 20 15:19:31.958664 master-0 kubenswrapper[28120]: I0220 15:19:31.958613 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b07a801-92f0-4edc-ae58-a816afea6976-config-data\") pod \"nova-scheduler-0\" (UID: \"6b07a801-92f0-4edc-ae58-a816afea6976\") " pod="openstack/nova-scheduler-0" Feb 20 15:19:31.958807 master-0 kubenswrapper[28120]: I0220 15:19:31.958789 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n2dr\" (UniqueName: \"kubernetes.io/projected/6b07a801-92f0-4edc-ae58-a816afea6976-kube-api-access-5n2dr\") pod \"nova-scheduler-0\" (UID: \"6b07a801-92f0-4edc-ae58-a816afea6976\") " pod="openstack/nova-scheduler-0" Feb 20 15:19:31.958967 master-0 kubenswrapper[28120]: I0220 15:19:31.958949 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-dns-swift-storage-0\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:31.959052 master-0 kubenswrapper[28120]: I0220 15:19:31.959032 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-ovsdbserver-sb\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.003179 master-0 kubenswrapper[28120]: I0220 15:19:32.003126 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 20 15:19:32.064270 master-0 kubenswrapper[28120]: I0220 15:19:32.064215 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b07a801-92f0-4edc-ae58-a816afea6976-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6b07a801-92f0-4edc-ae58-a816afea6976\") " pod="openstack/nova-scheduler-0" Feb 20 15:19:32.067211 master-0 kubenswrapper[28120]: I0220 15:19:32.064283 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b07a801-92f0-4edc-ae58-a816afea6976-config-data\") pod \"nova-scheduler-0\" (UID: \"6b07a801-92f0-4edc-ae58-a816afea6976\") " pod="openstack/nova-scheduler-0" Feb 20 15:19:32.067211 master-0 kubenswrapper[28120]: I0220 15:19:32.064375 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n2dr\" (UniqueName: \"kubernetes.io/projected/6b07a801-92f0-4edc-ae58-a816afea6976-kube-api-access-5n2dr\") pod \"nova-scheduler-0\" (UID: \"6b07a801-92f0-4edc-ae58-a816afea6976\") " pod="openstack/nova-scheduler-0" Feb 20 15:19:32.067211 master-0 kubenswrapper[28120]: I0220 15:19:32.064420 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-dns-swift-storage-0\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.067211 master-0 
kubenswrapper[28120]: I0220 15:19:32.064460 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-ovsdbserver-sb\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.067211 master-0 kubenswrapper[28120]: I0220 15:19:32.064542 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-ovsdbserver-nb\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.067211 master-0 kubenswrapper[28120]: I0220 15:19:32.064577 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-dns-svc\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.067211 master-0 kubenswrapper[28120]: I0220 15:19:32.064617 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-config\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.067211 master-0 kubenswrapper[28120]: I0220 15:19:32.064661 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbfwp\" (UniqueName: \"kubernetes.io/projected/4bf741d6-32da-404c-a508-ddf648ba8b62-kube-api-access-vbfwp\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.068459 master-0 
kubenswrapper[28120]: I0220 15:19:32.068057 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-ovsdbserver-sb\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.068799 master-0 kubenswrapper[28120]: I0220 15:19:32.068771 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b07a801-92f0-4edc-ae58-a816afea6976-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6b07a801-92f0-4edc-ae58-a816afea6976\") " pod="openstack/nova-scheduler-0" Feb 20 15:19:32.069396 master-0 kubenswrapper[28120]: I0220 15:19:32.069353 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-dns-svc\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.069661 master-0 kubenswrapper[28120]: I0220 15:19:32.069601 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-config\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.069719 master-0 kubenswrapper[28120]: I0220 15:19:32.069672 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-ovsdbserver-nb\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.070983 master-0 kubenswrapper[28120]: I0220 15:19:32.069961 28120 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-dns-swift-storage-0\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.087846 master-0 kubenswrapper[28120]: I0220 15:19:32.084531 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b07a801-92f0-4edc-ae58-a816afea6976-config-data\") pod \"nova-scheduler-0\" (UID: \"6b07a801-92f0-4edc-ae58-a816afea6976\") " pod="openstack/nova-scheduler-0" Feb 20 15:19:32.088258 master-0 kubenswrapper[28120]: I0220 15:19:32.088216 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n2dr\" (UniqueName: \"kubernetes.io/projected/6b07a801-92f0-4edc-ae58-a816afea6976-kube-api-access-5n2dr\") pod \"nova-scheduler-0\" (UID: \"6b07a801-92f0-4edc-ae58-a816afea6976\") " pod="openstack/nova-scheduler-0" Feb 20 15:19:32.092936 master-0 kubenswrapper[28120]: I0220 15:19:32.092826 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbfwp\" (UniqueName: \"kubernetes.io/projected/4bf741d6-32da-404c-a508-ddf648ba8b62-kube-api-access-vbfwp\") pod \"dnsmasq-dns-78d5d45447-5tn9v\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") " pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.148001 master-0 kubenswrapper[28120]: I0220 15:19:32.141896 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 15:19:32.167352 master-0 kubenswrapper[28120]: I0220 15:19:32.167311 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:19:32.181791 master-0 kubenswrapper[28120]: I0220 15:19:32.181720 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 15:19:32.190803 master-0 kubenswrapper[28120]: I0220 15:19:32.190748 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:32.231611 master-0 kubenswrapper[28120]: I0220 15:19:32.231560 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 20 15:19:32.289761 master-0 kubenswrapper[28120]: W0220 15:19:32.288786 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a28ab67_ed88_4926_8ebc_a1b0f52d2726.slice/crio-9083f4f0a5b0490093d95982f380ee221f60e1ebb8b2dfaa8d3959a9e8de3aa9 WatchSource:0}: Error finding container 9083f4f0a5b0490093d95982f380ee221f60e1ebb8b2dfaa8d3959a9e8de3aa9: Status 404 returned error can't find the container with id 9083f4f0a5b0490093d95982f380ee221f60e1ebb8b2dfaa8d3959a9e8de3aa9 Feb 20 15:19:32.355388 master-0 kubenswrapper[28120]: I0220 15:19:32.352461 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7mmwb"] Feb 20 15:19:32.355388 master-0 kubenswrapper[28120]: I0220 15:19:32.354353 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.361943 master-0 kubenswrapper[28120]: I0220 15:19:32.357022 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 20 15:19:32.361943 master-0 kubenswrapper[28120]: I0220 15:19:32.357195 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 20 15:19:32.382137 master-0 kubenswrapper[28120]: I0220 15:19:32.381240 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7mmwb"] Feb 20 15:19:32.439645 master-0 kubenswrapper[28120]: I0220 15:19:32.439046 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mhjxd"] Feb 20 15:19:32.478106 master-0 kubenswrapper[28120]: I0220 15:19:32.477788 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln52w\" (UniqueName: \"kubernetes.io/projected/2b3471d8-942c-4e40-a983-64f835dfa59c-kube-api-access-ln52w\") pod \"nova-cell1-conductor-db-sync-7mmwb\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.478106 master-0 kubenswrapper[28120]: I0220 15:19:32.477871 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-config-data\") pod \"nova-cell1-conductor-db-sync-7mmwb\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.478106 master-0 kubenswrapper[28120]: I0220 15:19:32.477955 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-7mmwb\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.478263 master-0 kubenswrapper[28120]: I0220 15:19:32.478066 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-scripts\") pod \"nova-cell1-conductor-db-sync-7mmwb\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.573620 master-0 kubenswrapper[28120]: I0220 15:19:32.573233 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 20 15:19:32.586289 master-0 kubenswrapper[28120]: I0220 15:19:32.586258 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln52w\" (UniqueName: \"kubernetes.io/projected/2b3471d8-942c-4e40-a983-64f835dfa59c-kube-api-access-ln52w\") pod \"nova-cell1-conductor-db-sync-7mmwb\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.586617 master-0 kubenswrapper[28120]: I0220 15:19:32.586589 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-config-data\") pod \"nova-cell1-conductor-db-sync-7mmwb\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.587098 master-0 kubenswrapper[28120]: I0220 15:19:32.587074 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7mmwb\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.587195 
master-0 kubenswrapper[28120]: I0220 15:19:32.587169 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-scripts\") pod \"nova-cell1-conductor-db-sync-7mmwb\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.590347 master-0 kubenswrapper[28120]: I0220 15:19:32.590118 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-config-data\") pod \"nova-cell1-conductor-db-sync-7mmwb\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.594063 master-0 kubenswrapper[28120]: I0220 15:19:32.591593 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-scripts\") pod \"nova-cell1-conductor-db-sync-7mmwb\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.594063 master-0 kubenswrapper[28120]: I0220 15:19:32.593655 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7mmwb\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.604756 master-0 kubenswrapper[28120]: I0220 15:19:32.604718 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln52w\" (UniqueName: \"kubernetes.io/projected/2b3471d8-942c-4e40-a983-64f835dfa59c-kube-api-access-ln52w\") pod \"nova-cell1-conductor-db-sync-7mmwb\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 
20 15:19:32.708074 master-0 kubenswrapper[28120]: I0220 15:19:32.700659 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:32.857473 master-0 kubenswrapper[28120]: I0220 15:19:32.853946 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:19:32.976474 master-0 kubenswrapper[28120]: I0220 15:19:32.976335 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 15:19:33.167367 master-0 kubenswrapper[28120]: I0220 15:19:33.167315 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:19:33.182431 master-0 kubenswrapper[28120]: W0220 15:19:33.180892 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bf741d6_32da_404c_a508_ddf648ba8b62.slice/crio-157765b7e92e39426d7a54f490519595618fdc1f459a37b748cebc9bdca97d97 WatchSource:0}: Error finding container 157765b7e92e39426d7a54f490519595618fdc1f459a37b748cebc9bdca97d97: Status 404 returned error can't find the container with id 157765b7e92e39426d7a54f490519595618fdc1f459a37b748cebc9bdca97d97 Feb 20 15:19:33.182535 master-0 kubenswrapper[28120]: I0220 15:19:33.182395 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78d5d45447-5tn9v"] Feb 20 15:19:33.184435 master-0 kubenswrapper[28120]: I0220 15:19:33.184399 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"536236e3-76b3-4dea-831c-cd2f327fda59","Type":"ContainerStarted","Data":"a858d836224b60626abc08326a9eb9b99cbf0efdd32839868fc0b2c13c239590"} Feb 20 15:19:33.187714 master-0 kubenswrapper[28120]: I0220 15:19:33.187654 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"a8ef7eb7-14c0-4364-a011-a9c22ea465a0","Type":"ContainerStarted","Data":"da14ca86de91fa875a9b8e4994c92f98c46e2a4a0ac1d19be01f4fc57d9be1de"} Feb 20 15:19:33.189277 master-0 kubenswrapper[28120]: I0220 15:19:33.189244 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"488fe4b5-a9c6-4373-b370-c9d253d13bf9","Type":"ContainerStarted","Data":"e8f048c22cf008695a5ac6b274e8bfef55392e354e2768da9ea7bf5d222f83ee"} Feb 20 15:19:33.191622 master-0 kubenswrapper[28120]: I0220 15:19:33.191573 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mhjxd" event={"ID":"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125","Type":"ContainerStarted","Data":"3bb64b304a67b83215030f0de12beded05e217274f2abddb8fd53523ecde31af"} Feb 20 15:19:33.191692 master-0 kubenswrapper[28120]: I0220 15:19:33.191632 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mhjxd" event={"ID":"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125","Type":"ContainerStarted","Data":"f78e5468a0a2c3057e4fa94983c4b4614f3ec7674cb6cabb92bff0a1b2c01c1a"} Feb 20 15:19:33.197903 master-0 kubenswrapper[28120]: I0220 15:19:33.197863 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"9a28ab67-ed88-4926-8ebc-a1b0f52d2726","Type":"ContainerStarted","Data":"9083f4f0a5b0490093d95982f380ee221f60e1ebb8b2dfaa8d3959a9e8de3aa9"} Feb 20 15:19:33.224603 master-0 kubenswrapper[28120]: I0220 15:19:33.224530 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mhjxd" podStartSLOduration=2.224510875 podStartE2EDuration="2.224510875s" podCreationTimestamp="2026-02-20 15:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:19:33.21148227 +0000 UTC m=+1111.472275843" watchObservedRunningTime="2026-02-20 
15:19:33.224510875 +0000 UTC m=+1111.485304438" Feb 20 15:19:33.298283 master-0 kubenswrapper[28120]: W0220 15:19:33.298236 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b3471d8_942c_4e40_a983_64f835dfa59c.slice/crio-ce70628135d28ba59c769cf1704c61c5c89ddc135498f258e8bb75943237ee5c WatchSource:0}: Error finding container ce70628135d28ba59c769cf1704c61c5c89ddc135498f258e8bb75943237ee5c: Status 404 returned error can't find the container with id ce70628135d28ba59c769cf1704c61c5c89ddc135498f258e8bb75943237ee5c Feb 20 15:19:33.302730 master-0 kubenswrapper[28120]: I0220 15:19:33.302671 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7mmwb"] Feb 20 15:19:34.212948 master-0 kubenswrapper[28120]: I0220 15:19:34.211870 28120 generic.go:334] "Generic (PLEG): container finished" podID="4bf741d6-32da-404c-a508-ddf648ba8b62" containerID="3d9da4b096e7092a01b9f71846b734bdab551b42eb80ba3e41fa25ff6abea189" exitCode=0 Feb 20 15:19:34.212948 master-0 kubenswrapper[28120]: I0220 15:19:34.211997 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" event={"ID":"4bf741d6-32da-404c-a508-ddf648ba8b62","Type":"ContainerDied","Data":"3d9da4b096e7092a01b9f71846b734bdab551b42eb80ba3e41fa25ff6abea189"} Feb 20 15:19:34.212948 master-0 kubenswrapper[28120]: I0220 15:19:34.212025 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" event={"ID":"4bf741d6-32da-404c-a508-ddf648ba8b62","Type":"ContainerStarted","Data":"157765b7e92e39426d7a54f490519595618fdc1f459a37b748cebc9bdca97d97"} Feb 20 15:19:34.217267 master-0 kubenswrapper[28120]: I0220 15:19:34.214188 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"6b07a801-92f0-4edc-ae58-a816afea6976","Type":"ContainerStarted","Data":"651b0195eb3d4a2c6ab6bc4d292c850abb7512968da35729ba8dca7745f34c4e"} Feb 20 15:19:34.221998 master-0 kubenswrapper[28120]: I0220 15:19:34.219056 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7mmwb" event={"ID":"2b3471d8-942c-4e40-a983-64f835dfa59c","Type":"ContainerStarted","Data":"4e785ac4fb727109b7d8b041af907e1f2c06f49a49d0f675ab61673757e44d30"} Feb 20 15:19:34.221998 master-0 kubenswrapper[28120]: I0220 15:19:34.219148 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7mmwb" event={"ID":"2b3471d8-942c-4e40-a983-64f835dfa59c","Type":"ContainerStarted","Data":"ce70628135d28ba59c769cf1704c61c5c89ddc135498f258e8bb75943237ee5c"} Feb 20 15:19:34.271144 master-0 kubenswrapper[28120]: I0220 15:19:34.271041 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7mmwb" podStartSLOduration=2.271024998 podStartE2EDuration="2.271024998s" podCreationTimestamp="2026-02-20 15:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:19:34.263582522 +0000 UTC m=+1112.524376085" watchObservedRunningTime="2026-02-20 15:19:34.271024998 +0000 UTC m=+1112.531818561" Feb 20 15:19:35.157351 master-0 kubenswrapper[28120]: I0220 15:19:35.157278 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 15:19:35.171602 master-0 kubenswrapper[28120]: I0220 15:19:35.171533 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:19:36.250402 master-0 kubenswrapper[28120]: I0220 15:19:36.250341 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"a8ef7eb7-14c0-4364-a011-a9c22ea465a0","Type":"ContainerStarted","Data":"0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1"} Feb 20 15:19:36.250981 master-0 kubenswrapper[28120]: I0220 15:19:36.250408 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a8ef7eb7-14c0-4364-a011-a9c22ea465a0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1" gracePeriod=30 Feb 20 15:19:36.255102 master-0 kubenswrapper[28120]: I0220 15:19:36.255053 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" event={"ID":"4bf741d6-32da-404c-a508-ddf648ba8b62","Type":"ContainerStarted","Data":"5afc18c64ceee0305ad458ab244e54e6cdbd0094d674cdb594580aa525b20d2f"} Feb 20 15:19:36.256007 master-0 kubenswrapper[28120]: I0220 15:19:36.255893 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:36.259091 master-0 kubenswrapper[28120]: I0220 15:19:36.259049 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"488fe4b5-a9c6-4373-b370-c9d253d13bf9","Type":"ContainerStarted","Data":"22add04063dcb1cde225ce4afb4c1ec90039b228e0bc849ebb73f4b7e87aef6d"} Feb 20 15:19:36.266745 master-0 kubenswrapper[28120]: I0220 15:19:36.266716 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"536236e3-76b3-4dea-831c-cd2f327fda59","Type":"ContainerStarted","Data":"49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02"} Feb 20 15:19:36.279399 master-0 kubenswrapper[28120]: I0220 15:19:36.279345 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"6b07a801-92f0-4edc-ae58-a816afea6976","Type":"ContainerStarted","Data":"be354745fe949b9fee3f8af0e6e07f2e79c78adcca145bc23c5aeff087f3d89f"} Feb 20 15:19:36.302095 master-0 kubenswrapper[28120]: I0220 15:19:36.301977 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.659913697 podStartE2EDuration="5.301908904s" podCreationTimestamp="2026-02-20 15:19:31 +0000 UTC" firstStartedPulling="2026-02-20 15:19:32.991246801 +0000 UTC m=+1111.252040364" lastFinishedPulling="2026-02-20 15:19:35.633242008 +0000 UTC m=+1113.894035571" observedRunningTime="2026-02-20 15:19:36.27046356 +0000 UTC m=+1114.531257143" watchObservedRunningTime="2026-02-20 15:19:36.301908904 +0000 UTC m=+1114.562702477" Feb 20 15:19:36.308366 master-0 kubenswrapper[28120]: I0220 15:19:36.308292 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" podStartSLOduration=5.308269182 podStartE2EDuration="5.308269182s" podCreationTimestamp="2026-02-20 15:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:19:36.30375851 +0000 UTC m=+1114.564552073" watchObservedRunningTime="2026-02-20 15:19:36.308269182 +0000 UTC m=+1114.569062745" Feb 20 15:19:36.338384 master-0 kubenswrapper[28120]: I0220 15:19:36.338285 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.895211201 podStartE2EDuration="5.33825804s" podCreationTimestamp="2026-02-20 15:19:31 +0000 UTC" firstStartedPulling="2026-02-20 15:19:33.19022145 +0000 UTC m=+1111.451015023" lastFinishedPulling="2026-02-20 15:19:35.633268299 +0000 UTC m=+1113.894061862" observedRunningTime="2026-02-20 15:19:36.326199829 +0000 UTC m=+1114.586993392" watchObservedRunningTime="2026-02-20 15:19:36.33825804 +0000 UTC m=+1114.599051603" Feb 
20 15:19:37.168647 master-0 kubenswrapper[28120]: I0220 15:19:37.168565 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:19:37.182790 master-0 kubenswrapper[28120]: I0220 15:19:37.182737 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 20 15:19:37.304916 master-0 kubenswrapper[28120]: I0220 15:19:37.304836 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"536236e3-76b3-4dea-831c-cd2f327fda59","Type":"ContainerStarted","Data":"b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d"} Feb 20 15:19:37.305532 master-0 kubenswrapper[28120]: I0220 15:19:37.305145 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="536236e3-76b3-4dea-831c-cd2f327fda59" containerName="nova-metadata-log" containerID="cri-o://49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02" gracePeriod=30 Feb 20 15:19:37.306137 master-0 kubenswrapper[28120]: I0220 15:19:37.305676 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="536236e3-76b3-4dea-831c-cd2f327fda59" containerName="nova-metadata-metadata" containerID="cri-o://b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d" gracePeriod=30 Feb 20 15:19:37.314000 master-0 kubenswrapper[28120]: I0220 15:19:37.313212 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"488fe4b5-a9c6-4373-b370-c9d253d13bf9","Type":"ContainerStarted","Data":"8be46299983c7984eb66b699659574e8a232daf4ef3ed407e17b693fec2ed59f"} Feb 20 15:19:37.348443 master-0 kubenswrapper[28120]: I0220 15:19:37.348353 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.573463086 podStartE2EDuration="6.348329365s" 
podCreationTimestamp="2026-02-20 15:19:31 +0000 UTC" firstStartedPulling="2026-02-20 15:19:32.857456276 +0000 UTC m=+1111.118249839" lastFinishedPulling="2026-02-20 15:19:35.632322555 +0000 UTC m=+1113.893116118" observedRunningTime="2026-02-20 15:19:37.330301505 +0000 UTC m=+1115.591095078" watchObservedRunningTime="2026-02-20 15:19:37.348329365 +0000 UTC m=+1115.609122928" Feb 20 15:19:37.366997 master-0 kubenswrapper[28120]: I0220 15:19:37.366857 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.324300476 podStartE2EDuration="6.366841456s" podCreationTimestamp="2026-02-20 15:19:31 +0000 UTC" firstStartedPulling="2026-02-20 15:19:32.588217686 +0000 UTC m=+1110.849011249" lastFinishedPulling="2026-02-20 15:19:35.630758666 +0000 UTC m=+1113.891552229" observedRunningTime="2026-02-20 15:19:37.359222286 +0000 UTC m=+1115.620015849" watchObservedRunningTime="2026-02-20 15:19:37.366841456 +0000 UTC m=+1115.627635019" Feb 20 15:19:38.076954 master-0 kubenswrapper[28120]: I0220 15:19:38.076633 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 15:19:38.181910 master-0 kubenswrapper[28120]: I0220 15:19:38.181845 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536236e3-76b3-4dea-831c-cd2f327fda59-combined-ca-bundle\") pod \"536236e3-76b3-4dea-831c-cd2f327fda59\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " Feb 20 15:19:38.182206 master-0 kubenswrapper[28120]: I0220 15:19:38.182175 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/536236e3-76b3-4dea-831c-cd2f327fda59-logs\") pod \"536236e3-76b3-4dea-831c-cd2f327fda59\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " Feb 20 15:19:38.182349 master-0 kubenswrapper[28120]: I0220 15:19:38.182324 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536236e3-76b3-4dea-831c-cd2f327fda59-config-data\") pod \"536236e3-76b3-4dea-831c-cd2f327fda59\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " Feb 20 15:19:38.182469 master-0 kubenswrapper[28120]: I0220 15:19:38.182442 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nhf5\" (UniqueName: \"kubernetes.io/projected/536236e3-76b3-4dea-831c-cd2f327fda59-kube-api-access-2nhf5\") pod \"536236e3-76b3-4dea-831c-cd2f327fda59\" (UID: \"536236e3-76b3-4dea-831c-cd2f327fda59\") " Feb 20 15:19:38.182746 master-0 kubenswrapper[28120]: I0220 15:19:38.182717 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/536236e3-76b3-4dea-831c-cd2f327fda59-logs" (OuterVolumeSpecName: "logs") pod "536236e3-76b3-4dea-831c-cd2f327fda59" (UID: "536236e3-76b3-4dea-831c-cd2f327fda59"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:19:38.183216 master-0 kubenswrapper[28120]: I0220 15:19:38.183192 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/536236e3-76b3-4dea-831c-cd2f327fda59-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:38.186349 master-0 kubenswrapper[28120]: I0220 15:19:38.186299 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536236e3-76b3-4dea-831c-cd2f327fda59-kube-api-access-2nhf5" (OuterVolumeSpecName: "kube-api-access-2nhf5") pod "536236e3-76b3-4dea-831c-cd2f327fda59" (UID: "536236e3-76b3-4dea-831c-cd2f327fda59"). InnerVolumeSpecName "kube-api-access-2nhf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:19:38.214495 master-0 kubenswrapper[28120]: I0220 15:19:38.214363 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536236e3-76b3-4dea-831c-cd2f327fda59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "536236e3-76b3-4dea-831c-cd2f327fda59" (UID: "536236e3-76b3-4dea-831c-cd2f327fda59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:38.226625 master-0 kubenswrapper[28120]: I0220 15:19:38.226582 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536236e3-76b3-4dea-831c-cd2f327fda59-config-data" (OuterVolumeSpecName: "config-data") pod "536236e3-76b3-4dea-831c-cd2f327fda59" (UID: "536236e3-76b3-4dea-831c-cd2f327fda59"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:38.285860 master-0 kubenswrapper[28120]: I0220 15:19:38.285788 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536236e3-76b3-4dea-831c-cd2f327fda59-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:38.285860 master-0 kubenswrapper[28120]: I0220 15:19:38.285838 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536236e3-76b3-4dea-831c-cd2f327fda59-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:38.285860 master-0 kubenswrapper[28120]: I0220 15:19:38.285855 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nhf5\" (UniqueName: \"kubernetes.io/projected/536236e3-76b3-4dea-831c-cd2f327fda59-kube-api-access-2nhf5\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:38.331554 master-0 kubenswrapper[28120]: I0220 15:19:38.331480 28120 generic.go:334] "Generic (PLEG): container finished" podID="536236e3-76b3-4dea-831c-cd2f327fda59" containerID="b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d" exitCode=0 Feb 20 15:19:38.331554 master-0 kubenswrapper[28120]: I0220 15:19:38.331519 28120 generic.go:334] "Generic (PLEG): container finished" podID="536236e3-76b3-4dea-831c-cd2f327fda59" containerID="49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02" exitCode=143 Feb 20 15:19:38.331554 master-0 kubenswrapper[28120]: I0220 15:19:38.331535 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 15:19:38.331554 master-0 kubenswrapper[28120]: I0220 15:19:38.331552 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"536236e3-76b3-4dea-831c-cd2f327fda59","Type":"ContainerDied","Data":"b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d"} Feb 20 15:19:38.332542 master-0 kubenswrapper[28120]: I0220 15:19:38.331616 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"536236e3-76b3-4dea-831c-cd2f327fda59","Type":"ContainerDied","Data":"49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02"} Feb 20 15:19:38.332542 master-0 kubenswrapper[28120]: I0220 15:19:38.331628 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"536236e3-76b3-4dea-831c-cd2f327fda59","Type":"ContainerDied","Data":"a858d836224b60626abc08326a9eb9b99cbf0efdd32839868fc0b2c13c239590"} Feb 20 15:19:38.332542 master-0 kubenswrapper[28120]: I0220 15:19:38.331645 28120 scope.go:117] "RemoveContainer" containerID="b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d" Feb 20 15:19:38.381846 master-0 kubenswrapper[28120]: I0220 15:19:38.381784 28120 scope.go:117] "RemoveContainer" containerID="49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02" Feb 20 15:19:38.409965 master-0 kubenswrapper[28120]: I0220 15:19:38.405668 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:19:38.421386 master-0 kubenswrapper[28120]: I0220 15:19:38.420694 28120 scope.go:117] "RemoveContainer" containerID="b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d" Feb 20 15:19:38.421543 master-0 kubenswrapper[28120]: E0220 15:19:38.421380 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d\": container with ID starting with b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d not found: ID does not exist" containerID="b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d" Feb 20 15:19:38.421543 master-0 kubenswrapper[28120]: I0220 15:19:38.421454 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d"} err="failed to get container status \"b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d\": rpc error: code = NotFound desc = could not find container \"b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d\": container with ID starting with b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d not found: ID does not exist" Feb 20 15:19:38.421543 master-0 kubenswrapper[28120]: I0220 15:19:38.421493 28120 scope.go:117] "RemoveContainer" containerID="49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02" Feb 20 15:19:38.421975 master-0 kubenswrapper[28120]: E0220 15:19:38.421899 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02\": container with ID starting with 49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02 not found: ID does not exist" containerID="49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02" Feb 20 15:19:38.422080 master-0 kubenswrapper[28120]: I0220 15:19:38.422012 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02"} err="failed to get container status \"49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02\": rpc error: code = NotFound desc = could not find container 
\"49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02\": container with ID starting with 49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02 not found: ID does not exist" Feb 20 15:19:38.422080 master-0 kubenswrapper[28120]: I0220 15:19:38.422033 28120 scope.go:117] "RemoveContainer" containerID="b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d" Feb 20 15:19:38.422554 master-0 kubenswrapper[28120]: I0220 15:19:38.422480 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d"} err="failed to get container status \"b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d\": rpc error: code = NotFound desc = could not find container \"b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d\": container with ID starting with b5ff2b39c6c27b8bc85a9456d959dc43baf30dba3b6a919fc345289cc10a980d not found: ID does not exist" Feb 20 15:19:38.422554 master-0 kubenswrapper[28120]: I0220 15:19:38.422538 28120 scope.go:117] "RemoveContainer" containerID="49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02" Feb 20 15:19:38.424004 master-0 kubenswrapper[28120]: I0220 15:19:38.422962 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02"} err="failed to get container status \"49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02\": rpc error: code = NotFound desc = could not find container \"49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02\": container with ID starting with 49c23fafa45dcbee391e482768661f9f93b5a7ca1d491175bf962a5d781e7f02 not found: ID does not exist" Feb 20 15:19:38.470757 master-0 kubenswrapper[28120]: I0220 15:19:38.428621 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:19:38.470757 master-0 
kubenswrapper[28120]: I0220 15:19:38.441232 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:19:38.470757 master-0 kubenswrapper[28120]: E0220 15:19:38.441830 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536236e3-76b3-4dea-831c-cd2f327fda59" containerName="nova-metadata-log" Feb 20 15:19:38.470757 master-0 kubenswrapper[28120]: I0220 15:19:38.441850 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="536236e3-76b3-4dea-831c-cd2f327fda59" containerName="nova-metadata-log" Feb 20 15:19:38.470757 master-0 kubenswrapper[28120]: E0220 15:19:38.441882 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536236e3-76b3-4dea-831c-cd2f327fda59" containerName="nova-metadata-metadata" Feb 20 15:19:38.470757 master-0 kubenswrapper[28120]: I0220 15:19:38.441891 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="536236e3-76b3-4dea-831c-cd2f327fda59" containerName="nova-metadata-metadata" Feb 20 15:19:38.470757 master-0 kubenswrapper[28120]: I0220 15:19:38.442271 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="536236e3-76b3-4dea-831c-cd2f327fda59" containerName="nova-metadata-metadata" Feb 20 15:19:38.470757 master-0 kubenswrapper[28120]: I0220 15:19:38.442323 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="536236e3-76b3-4dea-831c-cd2f327fda59" containerName="nova-metadata-log" Feb 20 15:19:38.470757 master-0 kubenswrapper[28120]: I0220 15:19:38.461986 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 15:19:38.470757 master-0 kubenswrapper[28120]: I0220 15:19:38.464595 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 20 15:19:38.470757 master-0 kubenswrapper[28120]: I0220 15:19:38.464658 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 20 15:19:38.470757 master-0 kubenswrapper[28120]: I0220 15:19:38.470730 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:19:38.593086 master-0 kubenswrapper[28120]: I0220 15:19:38.592761 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a500f39b-32fe-432e-b7dd-412608cd87aa-logs\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.593086 master-0 kubenswrapper[28120]: I0220 15:19:38.592850 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-config-data\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.593086 master-0 kubenswrapper[28120]: I0220 15:19:38.592945 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.593086 master-0 kubenswrapper[28120]: I0220 15:19:38.593028 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.593086 master-0 kubenswrapper[28120]: I0220 15:19:38.593047 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvncc\" (UniqueName: \"kubernetes.io/projected/a500f39b-32fe-432e-b7dd-412608cd87aa-kube-api-access-wvncc\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.695615 master-0 kubenswrapper[28120]: I0220 15:19:38.695536 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.695615 master-0 kubenswrapper[28120]: I0220 15:19:38.695603 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvncc\" (UniqueName: \"kubernetes.io/projected/a500f39b-32fe-432e-b7dd-412608cd87aa-kube-api-access-wvncc\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.696010 master-0 kubenswrapper[28120]: I0220 15:19:38.695783 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a500f39b-32fe-432e-b7dd-412608cd87aa-logs\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.696010 master-0 kubenswrapper[28120]: I0220 15:19:38.695836 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-config-data\") pod 
\"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.696010 master-0 kubenswrapper[28120]: I0220 15:19:38.695894 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.700068 master-0 kubenswrapper[28120]: I0220 15:19:38.697245 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a500f39b-32fe-432e-b7dd-412608cd87aa-logs\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.700505 master-0 kubenswrapper[28120]: I0220 15:19:38.700445 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.707310 master-0 kubenswrapper[28120]: I0220 15:19:38.707227 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-config-data\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.710769 master-0 kubenswrapper[28120]: I0220 15:19:38.710717 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.717909 master-0 
kubenswrapper[28120]: I0220 15:19:38.715736 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvncc\" (UniqueName: \"kubernetes.io/projected/a500f39b-32fe-432e-b7dd-412608cd87aa-kube-api-access-wvncc\") pod \"nova-metadata-0\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " pod="openstack/nova-metadata-0" Feb 20 15:19:38.788995 master-0 kubenswrapper[28120]: I0220 15:19:38.788906 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 15:19:39.369187 master-0 kubenswrapper[28120]: I0220 15:19:39.369088 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:19:40.075864 master-0 kubenswrapper[28120]: I0220 15:19:40.075811 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536236e3-76b3-4dea-831c-cd2f327fda59" path="/var/lib/kubelet/pods/536236e3-76b3-4dea-831c-cd2f327fda59/volumes" Feb 20 15:19:40.371009 master-0 kubenswrapper[28120]: I0220 15:19:40.370566 28120 generic.go:334] "Generic (PLEG): container finished" podID="90a72ded-aa5e-4a56-91a3-a1a7d7c2f125" containerID="3bb64b304a67b83215030f0de12beded05e217274f2abddb8fd53523ecde31af" exitCode=0 Feb 20 15:19:40.371009 master-0 kubenswrapper[28120]: I0220 15:19:40.370612 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mhjxd" event={"ID":"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125","Type":"ContainerDied","Data":"3bb64b304a67b83215030f0de12beded05e217274f2abddb8fd53523ecde31af"} Feb 20 15:19:42.003865 master-0 kubenswrapper[28120]: I0220 15:19:42.003813 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 20 15:19:42.004380 master-0 kubenswrapper[28120]: I0220 15:19:42.003881 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 20 15:19:42.182587 master-0 kubenswrapper[28120]: I0220 15:19:42.182488 
28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 20 15:19:42.193126 master-0 kubenswrapper[28120]: I0220 15:19:42.191967 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" Feb 20 15:19:42.250893 master-0 kubenswrapper[28120]: I0220 15:19:42.250841 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 20 15:19:42.307178 master-0 kubenswrapper[28120]: I0220 15:19:42.307044 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz"] Feb 20 15:19:42.417119 master-0 kubenswrapper[28120]: I0220 15:19:42.417055 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" podUID="39cd6f63-b18a-4428-8698-a8c176540d71" containerName="dnsmasq-dns" containerID="cri-o://606d40b0bcd7d370972ca6259083c7b714160955b8cc2e4a99abcd56c7da64fc" gracePeriod=10 Feb 20 15:19:42.457760 master-0 kubenswrapper[28120]: I0220 15:19:42.457713 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 20 15:19:43.087146 master-0 kubenswrapper[28120]: I0220 15:19:43.087085 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.6:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 20 15:19:43.087947 master-0 kubenswrapper[28120]: I0220 15:19:43.087081 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.6:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 20 15:19:43.436834 
master-0 kubenswrapper[28120]: I0220 15:19:43.436713 28120 generic.go:334] "Generic (PLEG): container finished" podID="39cd6f63-b18a-4428-8698-a8c176540d71" containerID="606d40b0bcd7d370972ca6259083c7b714160955b8cc2e4a99abcd56c7da64fc" exitCode=0 Feb 20 15:19:43.437892 master-0 kubenswrapper[28120]: I0220 15:19:43.437820 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" event={"ID":"39cd6f63-b18a-4428-8698-a8c176540d71","Type":"ContainerDied","Data":"606d40b0bcd7d370972ca6259083c7b714160955b8cc2e4a99abcd56c7da64fc"} Feb 20 15:19:43.749072 master-0 kubenswrapper[28120]: I0220 15:19:43.748978 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" podUID="39cd6f63-b18a-4428-8698-a8c176540d71" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.247:5353: connect: connection refused" Feb 20 15:19:44.956759 master-0 kubenswrapper[28120]: W0220 15:19:44.956637 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda500f39b_32fe_432e_b7dd_412608cd87aa.slice/crio-2619867e768f798000d782a0d41161521eb0c04e0b4abf524eeee0bd7ec577b3 WatchSource:0}: Error finding container 2619867e768f798000d782a0d41161521eb0c04e0b4abf524eeee0bd7ec577b3: Status 404 returned error can't find the container with id 2619867e768f798000d782a0d41161521eb0c04e0b4abf524eeee0bd7ec577b3 Feb 20 15:19:45.045020 master-0 kubenswrapper[28120]: I0220 15:19:45.044955 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:45.132762 master-0 kubenswrapper[28120]: I0220 15:19:45.130088 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-combined-ca-bundle\") pod \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " Feb 20 15:19:45.132762 master-0 kubenswrapper[28120]: I0220 15:19:45.130399 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-scripts\") pod \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " Feb 20 15:19:45.132762 master-0 kubenswrapper[28120]: I0220 15:19:45.130613 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56cmj\" (UniqueName: \"kubernetes.io/projected/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-kube-api-access-56cmj\") pod \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " Feb 20 15:19:45.132762 master-0 kubenswrapper[28120]: I0220 15:19:45.130660 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-config-data\") pod \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\" (UID: \"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125\") " Feb 20 15:19:45.146354 master-0 kubenswrapper[28120]: I0220 15:19:45.141535 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-scripts" (OuterVolumeSpecName: "scripts") pod "90a72ded-aa5e-4a56-91a3-a1a7d7c2f125" (UID: "90a72ded-aa5e-4a56-91a3-a1a7d7c2f125"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:45.146719 master-0 kubenswrapper[28120]: I0220 15:19:45.146660 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-kube-api-access-56cmj" (OuterVolumeSpecName: "kube-api-access-56cmj") pod "90a72ded-aa5e-4a56-91a3-a1a7d7c2f125" (UID: "90a72ded-aa5e-4a56-91a3-a1a7d7c2f125"). InnerVolumeSpecName "kube-api-access-56cmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:19:45.198964 master-0 kubenswrapper[28120]: I0220 15:19:45.198880 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90a72ded-aa5e-4a56-91a3-a1a7d7c2f125" (UID: "90a72ded-aa5e-4a56-91a3-a1a7d7c2f125"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:45.203303 master-0 kubenswrapper[28120]: I0220 15:19:45.203238 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-config-data" (OuterVolumeSpecName: "config-data") pod "90a72ded-aa5e-4a56-91a3-a1a7d7c2f125" (UID: "90a72ded-aa5e-4a56-91a3-a1a7d7c2f125"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:45.234811 master-0 kubenswrapper[28120]: I0220 15:19:45.234750 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:45.234930 master-0 kubenswrapper[28120]: I0220 15:19:45.234813 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56cmj\" (UniqueName: \"kubernetes.io/projected/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-kube-api-access-56cmj\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:45.234930 master-0 kubenswrapper[28120]: I0220 15:19:45.234831 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:45.234930 master-0 kubenswrapper[28120]: I0220 15:19:45.234842 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a72ded-aa5e-4a56-91a3-a1a7d7c2f125-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:45.439894 master-0 kubenswrapper[28120]: I0220 15:19:45.439452 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:19:45.507544 master-0 kubenswrapper[28120]: I0220 15:19:45.507169 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" event={"ID":"39cd6f63-b18a-4428-8698-a8c176540d71","Type":"ContainerDied","Data":"0bba76e5dc4e26e2afc1dbccb9084c35bbed8626f4c12638e83ec859dd805663"} Feb 20 15:19:45.507661 master-0 kubenswrapper[28120]: I0220 15:19:45.507592 28120 scope.go:117] "RemoveContainer" containerID="606d40b0bcd7d370972ca6259083c7b714160955b8cc2e4a99abcd56c7da64fc" Feb 20 15:19:45.507725 master-0 kubenswrapper[28120]: I0220 15:19:45.507471 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz" Feb 20 15:19:45.510325 master-0 kubenswrapper[28120]: I0220 15:19:45.510290 28120 generic.go:334] "Generic (PLEG): container finished" podID="2b3471d8-942c-4e40-a983-64f835dfa59c" containerID="4e785ac4fb727109b7d8b041af907e1f2c06f49a49d0f675ab61673757e44d30" exitCode=0 Feb 20 15:19:45.510477 master-0 kubenswrapper[28120]: I0220 15:19:45.510456 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7mmwb" event={"ID":"2b3471d8-942c-4e40-a983-64f835dfa59c","Type":"ContainerDied","Data":"4e785ac4fb727109b7d8b041af907e1f2c06f49a49d0f675ab61673757e44d30"} Feb 20 15:19:45.516295 master-0 kubenswrapper[28120]: I0220 15:19:45.516215 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mhjxd" Feb 20 15:19:45.516955 master-0 kubenswrapper[28120]: I0220 15:19:45.516360 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mhjxd" event={"ID":"90a72ded-aa5e-4a56-91a3-a1a7d7c2f125","Type":"ContainerDied","Data":"f78e5468a0a2c3057e4fa94983c4b4614f3ec7674cb6cabb92bff0a1b2c01c1a"} Feb 20 15:19:45.517055 master-0 kubenswrapper[28120]: I0220 15:19:45.516972 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f78e5468a0a2c3057e4fa94983c4b4614f3ec7674cb6cabb92bff0a1b2c01c1a" Feb 20 15:19:45.519697 master-0 kubenswrapper[28120]: I0220 15:19:45.519649 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a500f39b-32fe-432e-b7dd-412608cd87aa","Type":"ContainerStarted","Data":"29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d"} Feb 20 15:19:45.519697 master-0 kubenswrapper[28120]: I0220 15:19:45.519694 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a500f39b-32fe-432e-b7dd-412608cd87aa","Type":"ContainerStarted","Data":"2619867e768f798000d782a0d41161521eb0c04e0b4abf524eeee0bd7ec577b3"} Feb 20 15:19:45.523463 master-0 kubenswrapper[28120]: I0220 15:19:45.523417 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:45.539137 master-0 kubenswrapper[28120]: I0220 15:19:45.534060 28120 scope.go:117] "RemoveContainer" containerID="5192ce2dd5ff017015b2aaf15c9d4936de077110e06b2518e2ab1c548aaec450" Feb 20 15:19:45.550517 master-0 kubenswrapper[28120]: I0220 15:19:45.550419 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-ovsdbserver-sb\") pod \"39cd6f63-b18a-4428-8698-a8c176540d71\" (UID: 
\"39cd6f63-b18a-4428-8698-a8c176540d71\") " Feb 20 15:19:45.550913 master-0 kubenswrapper[28120]: I0220 15:19:45.550553 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-dns-svc\") pod \"39cd6f63-b18a-4428-8698-a8c176540d71\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " Feb 20 15:19:45.550979 master-0 kubenswrapper[28120]: I0220 15:19:45.550905 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj74z\" (UniqueName: \"kubernetes.io/projected/39cd6f63-b18a-4428-8698-a8c176540d71-kube-api-access-lj74z\") pod \"39cd6f63-b18a-4428-8698-a8c176540d71\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " Feb 20 15:19:45.551079 master-0 kubenswrapper[28120]: I0220 15:19:45.551046 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-dns-swift-storage-0\") pod \"39cd6f63-b18a-4428-8698-a8c176540d71\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " Feb 20 15:19:45.551145 master-0 kubenswrapper[28120]: I0220 15:19:45.551126 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-config\") pod \"39cd6f63-b18a-4428-8698-a8c176540d71\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " Feb 20 15:19:45.551296 master-0 kubenswrapper[28120]: I0220 15:19:45.551265 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-ovsdbserver-nb\") pod \"39cd6f63-b18a-4428-8698-a8c176540d71\" (UID: \"39cd6f63-b18a-4428-8698-a8c176540d71\") " Feb 20 15:19:45.564958 master-0 kubenswrapper[28120]: I0220 15:19:45.564833 28120 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39cd6f63-b18a-4428-8698-a8c176540d71-kube-api-access-lj74z" (OuterVolumeSpecName: "kube-api-access-lj74z") pod "39cd6f63-b18a-4428-8698-a8c176540d71" (UID: "39cd6f63-b18a-4428-8698-a8c176540d71"). InnerVolumeSpecName "kube-api-access-lj74z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:19:45.575938 master-0 kubenswrapper[28120]: I0220 15:19:45.573916 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=1.6350814439999999 podStartE2EDuration="14.573893105s" podCreationTimestamp="2026-02-20 15:19:31 +0000 UTC" firstStartedPulling="2026-02-20 15:19:32.322565305 +0000 UTC m=+1110.583358868" lastFinishedPulling="2026-02-20 15:19:45.261376946 +0000 UTC m=+1123.522170529" observedRunningTime="2026-02-20 15:19:45.557598028 +0000 UTC m=+1123.818391601" watchObservedRunningTime="2026-02-20 15:19:45.573893105 +0000 UTC m=+1123.834686668" Feb 20 15:19:45.611255 master-0 kubenswrapper[28120]: I0220 15:19:45.611193 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 20 15:19:45.628083 master-0 kubenswrapper[28120]: I0220 15:19:45.628028 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "39cd6f63-b18a-4428-8698-a8c176540d71" (UID: "39cd6f63-b18a-4428-8698-a8c176540d71"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:19:45.667272 master-0 kubenswrapper[28120]: I0220 15:19:45.667209 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj74z\" (UniqueName: \"kubernetes.io/projected/39cd6f63-b18a-4428-8698-a8c176540d71-kube-api-access-lj74z\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:45.667570 master-0 kubenswrapper[28120]: I0220 15:19:45.667557 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:45.674000 master-0 kubenswrapper[28120]: I0220 15:19:45.673873 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "39cd6f63-b18a-4428-8698-a8c176540d71" (UID: "39cd6f63-b18a-4428-8698-a8c176540d71"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:19:45.676299 master-0 kubenswrapper[28120]: I0220 15:19:45.676207 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "39cd6f63-b18a-4428-8698-a8c176540d71" (UID: "39cd6f63-b18a-4428-8698-a8c176540d71"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:19:45.689213 master-0 kubenswrapper[28120]: I0220 15:19:45.689148 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39cd6f63-b18a-4428-8698-a8c176540d71" (UID: "39cd6f63-b18a-4428-8698-a8c176540d71"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:19:45.713144 master-0 kubenswrapper[28120]: I0220 15:19:45.713075 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-config" (OuterVolumeSpecName: "config") pod "39cd6f63-b18a-4428-8698-a8c176540d71" (UID: "39cd6f63-b18a-4428-8698-a8c176540d71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:19:45.770296 master-0 kubenswrapper[28120]: I0220 15:19:45.770141 28120 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:45.770296 master-0 kubenswrapper[28120]: I0220 15:19:45.770213 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:45.770296 master-0 kubenswrapper[28120]: I0220 15:19:45.770228 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:45.770296 master-0 kubenswrapper[28120]: I0220 15:19:45.770240 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39cd6f63-b18a-4428-8698-a8c176540d71-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:45.852164 master-0 kubenswrapper[28120]: I0220 15:19:45.852081 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz"] Feb 20 15:19:45.867941 master-0 kubenswrapper[28120]: I0220 15:19:45.867846 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f4c4c4d6c-sfxkz"] Feb 20 15:19:46.101026 
master-0 kubenswrapper[28120]: I0220 15:19:46.100807 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39cd6f63-b18a-4428-8698-a8c176540d71" path="/var/lib/kubelet/pods/39cd6f63-b18a-4428-8698-a8c176540d71/volumes" Feb 20 15:19:46.329583 master-0 kubenswrapper[28120]: I0220 15:19:46.329469 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 20 15:19:46.329878 master-0 kubenswrapper[28120]: I0220 15:19:46.329839 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" containerName="nova-api-log" containerID="cri-o://22add04063dcb1cde225ce4afb4c1ec90039b228e0bc849ebb73f4b7e87aef6d" gracePeriod=30 Feb 20 15:19:46.330060 master-0 kubenswrapper[28120]: I0220 15:19:46.330021 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" containerName="nova-api-api" containerID="cri-o://8be46299983c7984eb66b699659574e8a232daf4ef3ed407e17b693fec2ed59f" gracePeriod=30 Feb 20 15:19:46.401181 master-0 kubenswrapper[28120]: I0220 15:19:46.401094 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:19:46.401422 master-0 kubenswrapper[28120]: I0220 15:19:46.401390 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6b07a801-92f0-4edc-ae58-a816afea6976" containerName="nova-scheduler-scheduler" containerID="cri-o://be354745fe949b9fee3f8af0e6e07f2e79c78adcca145bc23c5aeff087f3d89f" gracePeriod=30 Feb 20 15:19:46.436451 master-0 kubenswrapper[28120]: I0220 15:19:46.436383 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:19:46.542942 master-0 kubenswrapper[28120]: I0220 15:19:46.540788 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"9a28ab67-ed88-4926-8ebc-a1b0f52d2726","Type":"ContainerStarted","Data":"8810106cb1706de0e73a3383b88ebd0506339d1746e8499322288808cb045abf"} Feb 20 15:19:46.554667 master-0 kubenswrapper[28120]: I0220 15:19:46.554603 28120 generic.go:334] "Generic (PLEG): container finished" podID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" containerID="22add04063dcb1cde225ce4afb4c1ec90039b228e0bc849ebb73f4b7e87aef6d" exitCode=143 Feb 20 15:19:46.554849 master-0 kubenswrapper[28120]: I0220 15:19:46.554696 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"488fe4b5-a9c6-4373-b370-c9d253d13bf9","Type":"ContainerDied","Data":"22add04063dcb1cde225ce4afb4c1ec90039b228e0bc849ebb73f4b7e87aef6d"} Feb 20 15:19:46.558988 master-0 kubenswrapper[28120]: I0220 15:19:46.558015 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a500f39b-32fe-432e-b7dd-412608cd87aa","Type":"ContainerStarted","Data":"ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d"} Feb 20 15:19:46.591368 master-0 kubenswrapper[28120]: I0220 15:19:46.591279 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=8.59125776 podStartE2EDuration="8.59125776s" podCreationTimestamp="2026-02-20 15:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:19:46.579055846 +0000 UTC m=+1124.839849459" watchObservedRunningTime="2026-02-20 15:19:46.59125776 +0000 UTC m=+1124.852051323" Feb 20 15:19:47.159025 master-0 kubenswrapper[28120]: I0220 15:19:47.158163 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:47.185354 master-0 kubenswrapper[28120]: E0220 15:19:47.185247 28120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be354745fe949b9fee3f8af0e6e07f2e79c78adcca145bc23c5aeff087f3d89f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 20 15:19:47.187976 master-0 kubenswrapper[28120]: E0220 15:19:47.187885 28120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be354745fe949b9fee3f8af0e6e07f2e79c78adcca145bc23c5aeff087f3d89f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 20 15:19:47.189716 master-0 kubenswrapper[28120]: E0220 15:19:47.189675 28120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be354745fe949b9fee3f8af0e6e07f2e79c78adcca145bc23c5aeff087f3d89f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 20 15:19:47.189873 master-0 kubenswrapper[28120]: E0220 15:19:47.189843 28120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6b07a801-92f0-4edc-ae58-a816afea6976" containerName="nova-scheduler-scheduler" Feb 20 15:19:47.208216 master-0 kubenswrapper[28120]: I0220 15:19:47.208157 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-config-data\") pod \"2b3471d8-942c-4e40-a983-64f835dfa59c\" 
(UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " Feb 20 15:19:47.208791 master-0 kubenswrapper[28120]: I0220 15:19:47.208760 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln52w\" (UniqueName: \"kubernetes.io/projected/2b3471d8-942c-4e40-a983-64f835dfa59c-kube-api-access-ln52w\") pod \"2b3471d8-942c-4e40-a983-64f835dfa59c\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " Feb 20 15:19:47.208993 master-0 kubenswrapper[28120]: I0220 15:19:47.208967 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-scripts\") pod \"2b3471d8-942c-4e40-a983-64f835dfa59c\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " Feb 20 15:19:47.209144 master-0 kubenswrapper[28120]: I0220 15:19:47.209124 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-combined-ca-bundle\") pod \"2b3471d8-942c-4e40-a983-64f835dfa59c\" (UID: \"2b3471d8-942c-4e40-a983-64f835dfa59c\") " Feb 20 15:19:47.212842 master-0 kubenswrapper[28120]: I0220 15:19:47.212783 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3471d8-942c-4e40-a983-64f835dfa59c-kube-api-access-ln52w" (OuterVolumeSpecName: "kube-api-access-ln52w") pod "2b3471d8-942c-4e40-a983-64f835dfa59c" (UID: "2b3471d8-942c-4e40-a983-64f835dfa59c"). InnerVolumeSpecName "kube-api-access-ln52w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:19:47.221421 master-0 kubenswrapper[28120]: I0220 15:19:47.221333 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-scripts" (OuterVolumeSpecName: "scripts") pod "2b3471d8-942c-4e40-a983-64f835dfa59c" (UID: "2b3471d8-942c-4e40-a983-64f835dfa59c"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:47.252037 master-0 kubenswrapper[28120]: I0220 15:19:47.251974 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-config-data" (OuterVolumeSpecName: "config-data") pod "2b3471d8-942c-4e40-a983-64f835dfa59c" (UID: "2b3471d8-942c-4e40-a983-64f835dfa59c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:47.263683 master-0 kubenswrapper[28120]: I0220 15:19:47.263619 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b3471d8-942c-4e40-a983-64f835dfa59c" (UID: "2b3471d8-942c-4e40-a983-64f835dfa59c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:47.312393 master-0 kubenswrapper[28120]: I0220 15:19:47.312018 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln52w\" (UniqueName: \"kubernetes.io/projected/2b3471d8-942c-4e40-a983-64f835dfa59c-kube-api-access-ln52w\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:47.312393 master-0 kubenswrapper[28120]: I0220 15:19:47.312053 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-scripts\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:47.312393 master-0 kubenswrapper[28120]: I0220 15:19:47.312063 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:47.312393 master-0 kubenswrapper[28120]: I0220 15:19:47.312072 28120 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2b3471d8-942c-4e40-a983-64f835dfa59c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:47.590809 master-0 kubenswrapper[28120]: I0220 15:19:47.590681 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7mmwb" event={"ID":"2b3471d8-942c-4e40-a983-64f835dfa59c","Type":"ContainerDied","Data":"ce70628135d28ba59c769cf1704c61c5c89ddc135498f258e8bb75943237ee5c"} Feb 20 15:19:47.590809 master-0 kubenswrapper[28120]: I0220 15:19:47.590738 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce70628135d28ba59c769cf1704c61c5c89ddc135498f258e8bb75943237ee5c" Feb 20 15:19:47.590809 master-0 kubenswrapper[28120]: I0220 15:19:47.590798 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7mmwb" Feb 20 15:19:47.598271 master-0 kubenswrapper[28120]: I0220 15:19:47.595083 28120 generic.go:334] "Generic (PLEG): container finished" podID="6b07a801-92f0-4edc-ae58-a816afea6976" containerID="be354745fe949b9fee3f8af0e6e07f2e79c78adcca145bc23c5aeff087f3d89f" exitCode=0 Feb 20 15:19:47.598271 master-0 kubenswrapper[28120]: I0220 15:19:47.595971 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6b07a801-92f0-4edc-ae58-a816afea6976","Type":"ContainerDied","Data":"be354745fe949b9fee3f8af0e6e07f2e79c78adcca145bc23c5aeff087f3d89f"} Feb 20 15:19:47.598271 master-0 kubenswrapper[28120]: I0220 15:19:47.596097 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a500f39b-32fe-432e-b7dd-412608cd87aa" containerName="nova-metadata-log" containerID="cri-o://29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d" gracePeriod=30 Feb 20 15:19:47.598271 master-0 kubenswrapper[28120]: I0220 15:19:47.596506 28120 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/nova-metadata-0" podUID="a500f39b-32fe-432e-b7dd-412608cd87aa" containerName="nova-metadata-metadata" containerID="cri-o://ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d" gracePeriod=30 Feb 20 15:19:47.664160 master-0 kubenswrapper[28120]: I0220 15:19:47.664094 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 20 15:19:47.664803 master-0 kubenswrapper[28120]: E0220 15:19:47.664779 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39cd6f63-b18a-4428-8698-a8c176540d71" containerName="init" Feb 20 15:19:47.664803 master-0 kubenswrapper[28120]: I0220 15:19:47.664801 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="39cd6f63-b18a-4428-8698-a8c176540d71" containerName="init" Feb 20 15:19:47.664888 master-0 kubenswrapper[28120]: E0220 15:19:47.664821 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3471d8-942c-4e40-a983-64f835dfa59c" containerName="nova-cell1-conductor-db-sync" Feb 20 15:19:47.664888 master-0 kubenswrapper[28120]: I0220 15:19:47.664827 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3471d8-942c-4e40-a983-64f835dfa59c" containerName="nova-cell1-conductor-db-sync" Feb 20 15:19:47.664888 master-0 kubenswrapper[28120]: E0220 15:19:47.664852 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39cd6f63-b18a-4428-8698-a8c176540d71" containerName="dnsmasq-dns" Feb 20 15:19:47.664888 master-0 kubenswrapper[28120]: I0220 15:19:47.664859 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="39cd6f63-b18a-4428-8698-a8c176540d71" containerName="dnsmasq-dns" Feb 20 15:19:47.664888 master-0 kubenswrapper[28120]: E0220 15:19:47.664873 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a72ded-aa5e-4a56-91a3-a1a7d7c2f125" containerName="nova-manage" Feb 20 15:19:47.664888 master-0 kubenswrapper[28120]: I0220 15:19:47.664879 28120 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="90a72ded-aa5e-4a56-91a3-a1a7d7c2f125" containerName="nova-manage" Feb 20 15:19:47.665199 master-0 kubenswrapper[28120]: I0220 15:19:47.665168 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a72ded-aa5e-4a56-91a3-a1a7d7c2f125" containerName="nova-manage" Feb 20 15:19:47.665258 master-0 kubenswrapper[28120]: I0220 15:19:47.665245 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3471d8-942c-4e40-a983-64f835dfa59c" containerName="nova-cell1-conductor-db-sync" Feb 20 15:19:47.665309 master-0 kubenswrapper[28120]: I0220 15:19:47.665263 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="39cd6f63-b18a-4428-8698-a8c176540d71" containerName="dnsmasq-dns" Feb 20 15:19:47.674292 master-0 kubenswrapper[28120]: I0220 15:19:47.666533 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 20 15:19:47.689773 master-0 kubenswrapper[28120]: I0220 15:19:47.689724 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 20 15:19:47.757483 master-0 kubenswrapper[28120]: I0220 15:19:47.757431 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cdfs\" (UniqueName: \"kubernetes.io/projected/206ddf01-8889-4900-86a6-ca2e848dd0a1-kube-api-access-4cdfs\") pod \"nova-cell1-conductor-0\" (UID: \"206ddf01-8889-4900-86a6-ca2e848dd0a1\") " pod="openstack/nova-cell1-conductor-0" Feb 20 15:19:47.759304 master-0 kubenswrapper[28120]: I0220 15:19:47.759287 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206ddf01-8889-4900-86a6-ca2e848dd0a1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"206ddf01-8889-4900-86a6-ca2e848dd0a1\") " pod="openstack/nova-cell1-conductor-0" Feb 20 15:19:47.759497 master-0 
kubenswrapper[28120]: I0220 15:19:47.759451 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206ddf01-8889-4900-86a6-ca2e848dd0a1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"206ddf01-8889-4900-86a6-ca2e848dd0a1\") " pod="openstack/nova-cell1-conductor-0" Feb 20 15:19:47.815079 master-0 kubenswrapper[28120]: I0220 15:19:47.815021 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 20 15:19:47.862012 master-0 kubenswrapper[28120]: I0220 15:19:47.861868 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206ddf01-8889-4900-86a6-ca2e848dd0a1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"206ddf01-8889-4900-86a6-ca2e848dd0a1\") " pod="openstack/nova-cell1-conductor-0" Feb 20 15:19:47.862012 master-0 kubenswrapper[28120]: I0220 15:19:47.861947 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206ddf01-8889-4900-86a6-ca2e848dd0a1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"206ddf01-8889-4900-86a6-ca2e848dd0a1\") " pod="openstack/nova-cell1-conductor-0" Feb 20 15:19:47.862012 master-0 kubenswrapper[28120]: I0220 15:19:47.862002 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cdfs\" (UniqueName: \"kubernetes.io/projected/206ddf01-8889-4900-86a6-ca2e848dd0a1-kube-api-access-4cdfs\") pod \"nova-cell1-conductor-0\" (UID: \"206ddf01-8889-4900-86a6-ca2e848dd0a1\") " pod="openstack/nova-cell1-conductor-0" Feb 20 15:19:47.866198 master-0 kubenswrapper[28120]: I0220 15:19:47.866091 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/206ddf01-8889-4900-86a6-ca2e848dd0a1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"206ddf01-8889-4900-86a6-ca2e848dd0a1\") " pod="openstack/nova-cell1-conductor-0" Feb 20 15:19:47.866415 master-0 kubenswrapper[28120]: I0220 15:19:47.866250 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206ddf01-8889-4900-86a6-ca2e848dd0a1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"206ddf01-8889-4900-86a6-ca2e848dd0a1\") " pod="openstack/nova-cell1-conductor-0" Feb 20 15:19:47.880003 master-0 kubenswrapper[28120]: I0220 15:19:47.879957 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cdfs\" (UniqueName: \"kubernetes.io/projected/206ddf01-8889-4900-86a6-ca2e848dd0a1-kube-api-access-4cdfs\") pod \"nova-cell1-conductor-0\" (UID: \"206ddf01-8889-4900-86a6-ca2e848dd0a1\") " pod="openstack/nova-cell1-conductor-0" Feb 20 15:19:47.932676 master-0 kubenswrapper[28120]: I0220 15:19:47.932635 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 15:19:48.065903 master-0 kubenswrapper[28120]: I0220 15:19:48.065837 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b07a801-92f0-4edc-ae58-a816afea6976-config-data\") pod \"6b07a801-92f0-4edc-ae58-a816afea6976\" (UID: \"6b07a801-92f0-4edc-ae58-a816afea6976\") " Feb 20 15:19:48.066195 master-0 kubenswrapper[28120]: I0220 15:19:48.066102 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b07a801-92f0-4edc-ae58-a816afea6976-combined-ca-bundle\") pod \"6b07a801-92f0-4edc-ae58-a816afea6976\" (UID: \"6b07a801-92f0-4edc-ae58-a816afea6976\") " Feb 20 15:19:48.066258 master-0 kubenswrapper[28120]: I0220 15:19:48.066216 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n2dr\" (UniqueName: \"kubernetes.io/projected/6b07a801-92f0-4edc-ae58-a816afea6976-kube-api-access-5n2dr\") pod \"6b07a801-92f0-4edc-ae58-a816afea6976\" (UID: \"6b07a801-92f0-4edc-ae58-a816afea6976\") " Feb 20 15:19:48.070723 master-0 kubenswrapper[28120]: I0220 15:19:48.070666 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b07a801-92f0-4edc-ae58-a816afea6976-kube-api-access-5n2dr" (OuterVolumeSpecName: "kube-api-access-5n2dr") pod "6b07a801-92f0-4edc-ae58-a816afea6976" (UID: "6b07a801-92f0-4edc-ae58-a816afea6976"). InnerVolumeSpecName "kube-api-access-5n2dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:19:48.095652 master-0 kubenswrapper[28120]: I0220 15:19:48.094047 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 20 15:19:48.107245 master-0 kubenswrapper[28120]: I0220 15:19:48.107177 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b07a801-92f0-4edc-ae58-a816afea6976-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b07a801-92f0-4edc-ae58-a816afea6976" (UID: "6b07a801-92f0-4edc-ae58-a816afea6976"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:48.114461 master-0 kubenswrapper[28120]: I0220 15:19:48.114398 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b07a801-92f0-4edc-ae58-a816afea6976-config-data" (OuterVolumeSpecName: "config-data") pod "6b07a801-92f0-4edc-ae58-a816afea6976" (UID: "6b07a801-92f0-4edc-ae58-a816afea6976"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:48.170747 master-0 kubenswrapper[28120]: I0220 15:19:48.170687 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b07a801-92f0-4edc-ae58-a816afea6976-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:48.171179 master-0 kubenswrapper[28120]: I0220 15:19:48.170746 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b07a801-92f0-4edc-ae58-a816afea6976-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:48.171179 master-0 kubenswrapper[28120]: I0220 15:19:48.170781 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n2dr\" (UniqueName: \"kubernetes.io/projected/6b07a801-92f0-4edc-ae58-a816afea6976-kube-api-access-5n2dr\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:48.515280 master-0 kubenswrapper[28120]: I0220 15:19:48.515228 28120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 15:19:48.602516 master-0 kubenswrapper[28120]: I0220 15:19:48.601557 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 20 15:19:48.615568 master-0 kubenswrapper[28120]: I0220 15:19:48.615509 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 15:19:48.615717 master-0 kubenswrapper[28120]: I0220 15:19:48.615504 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6b07a801-92f0-4edc-ae58-a816afea6976","Type":"ContainerDied","Data":"651b0195eb3d4a2c6ab6bc4d292c850abb7512968da35729ba8dca7745f34c4e"} Feb 20 15:19:48.615793 master-0 kubenswrapper[28120]: I0220 15:19:48.615717 28120 scope.go:117] "RemoveContainer" containerID="be354745fe949b9fee3f8af0e6e07f2e79c78adcca145bc23c5aeff087f3d89f" Feb 20 15:19:48.628043 master-0 kubenswrapper[28120]: I0220 15:19:48.627588 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"206ddf01-8889-4900-86a6-ca2e848dd0a1","Type":"ContainerStarted","Data":"f6c6d93ebe19314cfb502d28fa3c93406b5ab7815719b85984b8303f0ce5dfe2"} Feb 20 15:19:48.636964 master-0 kubenswrapper[28120]: I0220 15:19:48.636843 28120 generic.go:334] "Generic (PLEG): container finished" podID="a500f39b-32fe-432e-b7dd-412608cd87aa" containerID="ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d" exitCode=0 Feb 20 15:19:48.636964 master-0 kubenswrapper[28120]: I0220 15:19:48.636890 28120 generic.go:334] "Generic (PLEG): container finished" podID="a500f39b-32fe-432e-b7dd-412608cd87aa" containerID="29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d" exitCode=143 Feb 20 15:19:48.636964 master-0 kubenswrapper[28120]: I0220 15:19:48.636953 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"a500f39b-32fe-432e-b7dd-412608cd87aa","Type":"ContainerDied","Data":"ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d"} Feb 20 15:19:48.637854 master-0 kubenswrapper[28120]: I0220 15:19:48.636987 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a500f39b-32fe-432e-b7dd-412608cd87aa","Type":"ContainerDied","Data":"29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d"} Feb 20 15:19:48.637854 master-0 kubenswrapper[28120]: I0220 15:19:48.637005 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a500f39b-32fe-432e-b7dd-412608cd87aa","Type":"ContainerDied","Data":"2619867e768f798000d782a0d41161521eb0c04e0b4abf524eeee0bd7ec577b3"} Feb 20 15:19:48.637854 master-0 kubenswrapper[28120]: I0220 15:19:48.637083 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 15:19:48.679094 master-0 kubenswrapper[28120]: I0220 15:19:48.674290 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:19:48.690431 master-0 kubenswrapper[28120]: I0220 15:19:48.690390 28120 scope.go:117] "RemoveContainer" containerID="ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d" Feb 20 15:19:48.697107 master-0 kubenswrapper[28120]: I0220 15:19:48.697045 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-config-data\") pod \"a500f39b-32fe-432e-b7dd-412608cd87aa\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " Feb 20 15:19:48.697209 master-0 kubenswrapper[28120]: I0220 15:19:48.697134 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-combined-ca-bundle\") pod 
\"a500f39b-32fe-432e-b7dd-412608cd87aa\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " Feb 20 15:19:48.697352 master-0 kubenswrapper[28120]: I0220 15:19:48.697309 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvncc\" (UniqueName: \"kubernetes.io/projected/a500f39b-32fe-432e-b7dd-412608cd87aa-kube-api-access-wvncc\") pod \"a500f39b-32fe-432e-b7dd-412608cd87aa\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " Feb 20 15:19:48.697414 master-0 kubenswrapper[28120]: I0220 15:19:48.697364 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-nova-metadata-tls-certs\") pod \"a500f39b-32fe-432e-b7dd-412608cd87aa\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " Feb 20 15:19:48.699558 master-0 kubenswrapper[28120]: I0220 15:19:48.699488 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a500f39b-32fe-432e-b7dd-412608cd87aa-logs\") pod \"a500f39b-32fe-432e-b7dd-412608cd87aa\" (UID: \"a500f39b-32fe-432e-b7dd-412608cd87aa\") " Feb 20 15:19:48.742946 master-0 kubenswrapper[28120]: I0220 15:19:48.739137 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:19:48.742946 master-0 kubenswrapper[28120]: I0220 15:19:48.739557 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a500f39b-32fe-432e-b7dd-412608cd87aa-logs" (OuterVolumeSpecName: "logs") pod "a500f39b-32fe-432e-b7dd-412608cd87aa" (UID: "a500f39b-32fe-432e-b7dd-412608cd87aa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:19:48.783942 master-0 kubenswrapper[28120]: I0220 15:19:48.782506 28120 scope.go:117] "RemoveContainer" containerID="29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d" Feb 20 15:19:48.783942 master-0 kubenswrapper[28120]: I0220 15:19:48.783013 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a500f39b-32fe-432e-b7dd-412608cd87aa-kube-api-access-wvncc" (OuterVolumeSpecName: "kube-api-access-wvncc") pod "a500f39b-32fe-432e-b7dd-412608cd87aa" (UID: "a500f39b-32fe-432e-b7dd-412608cd87aa"). InnerVolumeSpecName "kube-api-access-wvncc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:19:48.786678 master-0 kubenswrapper[28120]: I0220 15:19:48.786043 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:19:48.786678 master-0 kubenswrapper[28120]: E0220 15:19:48.786583 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a500f39b-32fe-432e-b7dd-412608cd87aa" containerName="nova-metadata-log" Feb 20 15:19:48.786678 master-0 kubenswrapper[28120]: I0220 15:19:48.786599 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a500f39b-32fe-432e-b7dd-412608cd87aa" containerName="nova-metadata-log" Feb 20 15:19:48.786678 master-0 kubenswrapper[28120]: E0220 15:19:48.786617 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a500f39b-32fe-432e-b7dd-412608cd87aa" containerName="nova-metadata-metadata" Feb 20 15:19:48.786678 master-0 kubenswrapper[28120]: I0220 15:19:48.786622 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a500f39b-32fe-432e-b7dd-412608cd87aa" containerName="nova-metadata-metadata" Feb 20 15:19:48.787767 master-0 kubenswrapper[28120]: E0220 15:19:48.787044 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b07a801-92f0-4edc-ae58-a816afea6976" containerName="nova-scheduler-scheduler" Feb 20 
15:19:48.787767 master-0 kubenswrapper[28120]: I0220 15:19:48.787062 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b07a801-92f0-4edc-ae58-a816afea6976" containerName="nova-scheduler-scheduler" Feb 20 15:19:48.787767 master-0 kubenswrapper[28120]: I0220 15:19:48.787336 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="a500f39b-32fe-432e-b7dd-412608cd87aa" containerName="nova-metadata-log" Feb 20 15:19:48.787767 master-0 kubenswrapper[28120]: I0220 15:19:48.787348 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b07a801-92f0-4edc-ae58-a816afea6976" containerName="nova-scheduler-scheduler" Feb 20 15:19:48.787767 master-0 kubenswrapper[28120]: I0220 15:19:48.787389 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="a500f39b-32fe-432e-b7dd-412608cd87aa" containerName="nova-metadata-metadata" Feb 20 15:19:48.788894 master-0 kubenswrapper[28120]: I0220 15:19:48.788539 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 15:19:48.791859 master-0 kubenswrapper[28120]: I0220 15:19:48.791826 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 20 15:19:48.800899 master-0 kubenswrapper[28120]: I0220 15:19:48.800811 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:19:48.809401 master-0 kubenswrapper[28120]: I0220 15:19:48.809347 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a500f39b-32fe-432e-b7dd-412608cd87aa" (UID: "a500f39b-32fe-432e-b7dd-412608cd87aa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:48.810867 master-0 kubenswrapper[28120]: I0220 15:19:48.810819 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-config-data" (OuterVolumeSpecName: "config-data") pod "a500f39b-32fe-432e-b7dd-412608cd87aa" (UID: "a500f39b-32fe-432e-b7dd-412608cd87aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:19:48.834230 master-0 kubenswrapper[28120]: I0220 15:19:48.834075 28120 scope.go:117] "RemoveContainer" containerID="ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d" Feb 20 15:19:48.834712 master-0 kubenswrapper[28120]: E0220 15:19:48.834649 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d\": container with ID starting with ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d not found: ID does not exist" containerID="ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d" Feb 20 15:19:48.836097 master-0 kubenswrapper[28120]: I0220 15:19:48.836027 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d"} err="failed to get container status \"ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d\": rpc error: code = NotFound desc = could not find container \"ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d\": container with ID starting with ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d not found: ID does not exist" Feb 20 15:19:48.836097 master-0 kubenswrapper[28120]: I0220 15:19:48.836087 28120 scope.go:117] "RemoveContainer" containerID="29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d" Feb 20 15:19:48.836619 master-0 
kubenswrapper[28120]: E0220 15:19:48.836586 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d\": container with ID starting with 29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d not found: ID does not exist" containerID="29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d" Feb 20 15:19:48.836693 master-0 kubenswrapper[28120]: I0220 15:19:48.836618 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d"} err="failed to get container status \"29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d\": rpc error: code = NotFound desc = could not find container \"29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d\": container with ID starting with 29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d not found: ID does not exist" Feb 20 15:19:48.836693 master-0 kubenswrapper[28120]: I0220 15:19:48.836634 28120 scope.go:117] "RemoveContainer" containerID="ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d" Feb 20 15:19:48.837015 master-0 kubenswrapper[28120]: I0220 15:19:48.836969 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d"} err="failed to get container status \"ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d\": rpc error: code = NotFound desc = could not find container \"ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d\": container with ID starting with ab08aadccd22c9ebf34026b2cb972f4b373ed23e859e7cb02a26a1107408e00d not found: ID does not exist" Feb 20 15:19:48.837133 master-0 kubenswrapper[28120]: I0220 15:19:48.837111 28120 scope.go:117] "RemoveContainer" 
containerID="29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d" Feb 20 15:19:48.837957 master-0 kubenswrapper[28120]: I0220 15:19:48.837914 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d"} err="failed to get container status \"29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d\": rpc error: code = NotFound desc = could not find container \"29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d\": container with ID starting with 29396bab4828d1cfa99352652b31aaf8431caff79b8f4f9e7cecda5cb10abb2d not found: ID does not exist" Feb 20 15:19:48.842352 master-0 kubenswrapper[28120]: I0220 15:19:48.842297 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:48.842532 master-0 kubenswrapper[28120]: I0220 15:19:48.842516 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:48.842668 master-0 kubenswrapper[28120]: I0220 15:19:48.842651 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvncc\" (UniqueName: \"kubernetes.io/projected/a500f39b-32fe-432e-b7dd-412608cd87aa-kube-api-access-wvncc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:48.842957 master-0 kubenswrapper[28120]: I0220 15:19:48.842943 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a500f39b-32fe-432e-b7dd-412608cd87aa-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:19:48.843888 master-0 kubenswrapper[28120]: I0220 15:19:48.843825 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a500f39b-32fe-432e-b7dd-412608cd87aa" (UID: "a500f39b-32fe-432e-b7dd-412608cd87aa"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:19:48.945699 master-0 kubenswrapper[28120]: I0220 15:19:48.945540 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-config-data\") pod \"nova-scheduler-0\" (UID: \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\") " pod="openstack/nova-scheduler-0"
Feb 20 15:19:48.946049 master-0 kubenswrapper[28120]: I0220 15:19:48.945802 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\") " pod="openstack/nova-scheduler-0"
Feb 20 15:19:48.946901 master-0 kubenswrapper[28120]: I0220 15:19:48.946777 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzh49\" (UniqueName: \"kubernetes.io/projected/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-kube-api-access-rzh49\") pod \"nova-scheduler-0\" (UID: \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\") " pod="openstack/nova-scheduler-0"
Feb 20 15:19:48.947388 master-0 kubenswrapper[28120]: I0220 15:19:48.947360 28120 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a500f39b-32fe-432e-b7dd-412608cd87aa-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:19:48.983431 master-0 kubenswrapper[28120]: I0220 15:19:48.983374 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 15:19:49.000668 master-0 kubenswrapper[28120]: I0220 15:19:49.000603 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 15:19:49.021456 master-0 kubenswrapper[28120]: I0220 15:19:49.021381 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 15:19:49.023962 master-0 kubenswrapper[28120]: I0220 15:19:49.023909 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 20 15:19:49.028001 master-0 kubenswrapper[28120]: I0220 15:19:49.027908 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 20 15:19:49.028305 master-0 kubenswrapper[28120]: I0220 15:19:49.028253 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 20 15:19:49.046177 master-0 kubenswrapper[28120]: I0220 15:19:49.046113 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 15:19:49.049089 master-0 kubenswrapper[28120]: I0220 15:19:49.049027 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzh49\" (UniqueName: \"kubernetes.io/projected/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-kube-api-access-rzh49\") pod \"nova-scheduler-0\" (UID: \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\") " pod="openstack/nova-scheduler-0"
Feb 20 15:19:49.049599 master-0 kubenswrapper[28120]: I0220 15:19:49.049299 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-config-data\") pod \"nova-scheduler-0\" (UID: \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\") " pod="openstack/nova-scheduler-0"
Feb 20 15:19:49.049822 master-0 kubenswrapper[28120]: I0220 15:19:49.049689 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\") " pod="openstack/nova-scheduler-0"
Feb 20 15:19:49.053832 master-0 kubenswrapper[28120]: I0220 15:19:49.053785 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\") " pod="openstack/nova-scheduler-0"
Feb 20 15:19:49.057483 master-0 kubenswrapper[28120]: I0220 15:19:49.057437 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-config-data\") pod \"nova-scheduler-0\" (UID: \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\") " pod="openstack/nova-scheduler-0"
Feb 20 15:19:49.072285 master-0 kubenswrapper[28120]: I0220 15:19:49.072199 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzh49\" (UniqueName: \"kubernetes.io/projected/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-kube-api-access-rzh49\") pod \"nova-scheduler-0\" (UID: \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\") " pod="openstack/nova-scheduler-0"
Feb 20 15:19:49.127441 master-0 kubenswrapper[28120]: I0220 15:19:49.127357 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 20 15:19:49.158440 master-0 kubenswrapper[28120]: I0220 15:19:49.156335 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.158440 master-0 kubenswrapper[28120]: I0220 15:19:49.156729 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-config-data\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.158440 master-0 kubenswrapper[28120]: I0220 15:19:49.156948 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c45a9112-7488-415a-8a09-70d1af190834-logs\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.158440 master-0 kubenswrapper[28120]: I0220 15:19:49.157061 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd7zq\" (UniqueName: \"kubernetes.io/projected/c45a9112-7488-415a-8a09-70d1af190834-kube-api-access-bd7zq\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.158440 master-0 kubenswrapper[28120]: I0220 15:19:49.157362 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.273193 master-0 kubenswrapper[28120]: I0220 15:19:49.262792 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd7zq\" (UniqueName: \"kubernetes.io/projected/c45a9112-7488-415a-8a09-70d1af190834-kube-api-access-bd7zq\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.273193 master-0 kubenswrapper[28120]: I0220 15:19:49.269398 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.273193 master-0 kubenswrapper[28120]: I0220 15:19:49.269659 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.273193 master-0 kubenswrapper[28120]: I0220 15:19:49.269820 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-config-data\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.273193 master-0 kubenswrapper[28120]: I0220 15:19:49.270034 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c45a9112-7488-415a-8a09-70d1af190834-logs\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.273193 master-0 kubenswrapper[28120]: I0220 15:19:49.270771 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c45a9112-7488-415a-8a09-70d1af190834-logs\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.273193 master-0 kubenswrapper[28120]: I0220 15:19:49.272869 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.273193 master-0 kubenswrapper[28120]: I0220 15:19:49.273147 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.277059 master-0 kubenswrapper[28120]: I0220 15:19:49.274554 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-config-data\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.283547 master-0 kubenswrapper[28120]: I0220 15:19:49.280225 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd7zq\" (UniqueName: \"kubernetes.io/projected/c45a9112-7488-415a-8a09-70d1af190834-kube-api-access-bd7zq\") pod \"nova-metadata-0\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " pod="openstack/nova-metadata-0"
Feb 20 15:19:49.474591 master-0 kubenswrapper[28120]: I0220 15:19:49.460123 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 20 15:19:49.647267 master-0 kubenswrapper[28120]: I0220 15:19:49.647196 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 20 15:19:49.663451 master-0 kubenswrapper[28120]: I0220 15:19:49.663227 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"206ddf01-8889-4900-86a6-ca2e848dd0a1","Type":"ContainerStarted","Data":"94870bd33c46b00b98e6662b4f792d5cfed0f90002d9debc16c300536c0df03e"}
Feb 20 15:19:49.663699 master-0 kubenswrapper[28120]: I0220 15:19:49.663675 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 20 15:19:49.667029 master-0 kubenswrapper[28120]: I0220 15:19:49.665647 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36eb9f8e-a58a-46e6-9fe5-f36c0058810d","Type":"ContainerStarted","Data":"afabd19320b83de37f323eb78e446b875efaaf79f8939527a4cdf4095d97ec61"}
Feb 20 15:19:49.668990 master-0 kubenswrapper[28120]: I0220 15:19:49.668953 28120 generic.go:334] "Generic (PLEG): container finished" podID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" containerID="8be46299983c7984eb66b699659574e8a232daf4ef3ed407e17b693fec2ed59f" exitCode=0
Feb 20 15:19:49.669097 master-0 kubenswrapper[28120]: I0220 15:19:49.669004 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"488fe4b5-a9c6-4373-b370-c9d253d13bf9","Type":"ContainerDied","Data":"8be46299983c7984eb66b699659574e8a232daf4ef3ed407e17b693fec2ed59f"}
Feb 20 15:19:49.690757 master-0 kubenswrapper[28120]: I0220 15:19:49.690653 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.6906285370000003 podStartE2EDuration="2.690628537s" podCreationTimestamp="2026-02-20 15:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:19:49.684593887 +0000 UTC m=+1127.945387460" watchObservedRunningTime="2026-02-20 15:19:49.690628537 +0000 UTC m=+1127.951422130"
Feb 20 15:19:49.966726 master-0 kubenswrapper[28120]: I0220 15:19:49.966666 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 15:19:49.979422 master-0 kubenswrapper[28120]: W0220 15:19:49.978711 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc45a9112_7488_415a_8a09_70d1af190834.slice/crio-6f96282fd2745e896bf68946e8484765d054adf422d3ecd91277db1223918d17 WatchSource:0}: Error finding container 6f96282fd2745e896bf68946e8484765d054adf422d3ecd91277db1223918d17: Status 404 returned error can't find the container with id 6f96282fd2745e896bf68946e8484765d054adf422d3ecd91277db1223918d17
Feb 20 15:19:50.073994 master-0 kubenswrapper[28120]: I0220 15:19:50.073935 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b07a801-92f0-4edc-ae58-a816afea6976" path="/var/lib/kubelet/pods/6b07a801-92f0-4edc-ae58-a816afea6976/volumes"
Feb 20 15:19:50.074705 master-0 kubenswrapper[28120]: I0220 15:19:50.074680 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a500f39b-32fe-432e-b7dd-412608cd87aa" path="/var/lib/kubelet/pods/a500f39b-32fe-432e-b7dd-412608cd87aa/volumes"
Feb 20 15:19:50.171777 master-0 kubenswrapper[28120]: I0220 15:19:50.171719 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 20 15:19:50.188978 master-0 kubenswrapper[28120]: I0220 15:19:50.188681 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488fe4b5-a9c6-4373-b370-c9d253d13bf9-config-data\") pod \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") "
Feb 20 15:19:50.188978 master-0 kubenswrapper[28120]: I0220 15:19:50.188977 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qzml\" (UniqueName: \"kubernetes.io/projected/488fe4b5-a9c6-4373-b370-c9d253d13bf9-kube-api-access-8qzml\") pod \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") "
Feb 20 15:19:50.189400 master-0 kubenswrapper[28120]: I0220 15:19:50.189149 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/488fe4b5-a9c6-4373-b370-c9d253d13bf9-logs\") pod \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") "
Feb 20 15:19:50.189528 master-0 kubenswrapper[28120]: I0220 15:19:50.189452 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488fe4b5-a9c6-4373-b370-c9d253d13bf9-combined-ca-bundle\") pod \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\" (UID: \"488fe4b5-a9c6-4373-b370-c9d253d13bf9\") "
Feb 20 15:19:50.192754 master-0 kubenswrapper[28120]: I0220 15:19:50.192113 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/488fe4b5-a9c6-4373-b370-c9d253d13bf9-logs" (OuterVolumeSpecName: "logs") pod "488fe4b5-a9c6-4373-b370-c9d253d13bf9" (UID: "488fe4b5-a9c6-4373-b370-c9d253d13bf9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:19:50.242509 master-0 kubenswrapper[28120]: I0220 15:19:50.208071 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/488fe4b5-a9c6-4373-b370-c9d253d13bf9-kube-api-access-8qzml" (OuterVolumeSpecName: "kube-api-access-8qzml") pod "488fe4b5-a9c6-4373-b370-c9d253d13bf9" (UID: "488fe4b5-a9c6-4373-b370-c9d253d13bf9"). InnerVolumeSpecName "kube-api-access-8qzml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:19:50.254828 master-0 kubenswrapper[28120]: I0220 15:19:50.254765 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/488fe4b5-a9c6-4373-b370-c9d253d13bf9-config-data" (OuterVolumeSpecName: "config-data") pod "488fe4b5-a9c6-4373-b370-c9d253d13bf9" (UID: "488fe4b5-a9c6-4373-b370-c9d253d13bf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:19:50.275443 master-0 kubenswrapper[28120]: I0220 15:19:50.275344 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/488fe4b5-a9c6-4373-b370-c9d253d13bf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "488fe4b5-a9c6-4373-b370-c9d253d13bf9" (UID: "488fe4b5-a9c6-4373-b370-c9d253d13bf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:19:50.296944 master-0 kubenswrapper[28120]: I0220 15:19:50.296866 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488fe4b5-a9c6-4373-b370-c9d253d13bf9-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 15:19:50.296944 master-0 kubenswrapper[28120]: I0220 15:19:50.296905 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qzml\" (UniqueName: \"kubernetes.io/projected/488fe4b5-a9c6-4373-b370-c9d253d13bf9-kube-api-access-8qzml\") on node \"master-0\" DevicePath \"\""
Feb 20 15:19:50.296944 master-0 kubenswrapper[28120]: I0220 15:19:50.296915 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/488fe4b5-a9c6-4373-b370-c9d253d13bf9-logs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:19:50.296944 master-0 kubenswrapper[28120]: I0220 15:19:50.296940 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488fe4b5-a9c6-4373-b370-c9d253d13bf9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:19:50.689256 master-0 kubenswrapper[28120]: I0220 15:19:50.689064 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36eb9f8e-a58a-46e6-9fe5-f36c0058810d","Type":"ContainerStarted","Data":"302e23686b43ba7052a2f06e1a7c22122a338cc9de6961ef418c9491c19497d6"}
Feb 20 15:19:50.691618 master-0 kubenswrapper[28120]: I0220 15:19:50.691538 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c45a9112-7488-415a-8a09-70d1af190834","Type":"ContainerStarted","Data":"c3713c31c2f1e0747582562079acb92309e6cce2c860e78191563c33f87045ab"}
Feb 20 15:19:50.691618 master-0 kubenswrapper[28120]: I0220 15:19:50.691614 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c45a9112-7488-415a-8a09-70d1af190834","Type":"ContainerStarted","Data":"bbff39fb3eabd8d1f8f828ee93c4a606f52ba17b78591c20872e1f60a7629cd4"}
Feb 20 15:19:50.691618 master-0 kubenswrapper[28120]: I0220 15:19:50.691625 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c45a9112-7488-415a-8a09-70d1af190834","Type":"ContainerStarted","Data":"6f96282fd2745e896bf68946e8484765d054adf422d3ecd91277db1223918d17"}
Feb 20 15:19:50.693485 master-0 kubenswrapper[28120]: I0220 15:19:50.693410 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"488fe4b5-a9c6-4373-b370-c9d253d13bf9","Type":"ContainerDied","Data":"e8f048c22cf008695a5ac6b274e8bfef55392e354e2768da9ea7bf5d222f83ee"}
Feb 20 15:19:50.693608 master-0 kubenswrapper[28120]: I0220 15:19:50.693490 28120 scope.go:117] "RemoveContainer" containerID="8be46299983c7984eb66b699659574e8a232daf4ef3ed407e17b693fec2ed59f"
Feb 20 15:19:50.693608 master-0 kubenswrapper[28120]: I0220 15:19:50.693425 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 20 15:19:50.740384 master-0 kubenswrapper[28120]: I0220 15:19:50.736530 28120 scope.go:117] "RemoveContainer" containerID="22add04063dcb1cde225ce4afb4c1ec90039b228e0bc849ebb73f4b7e87aef6d"
Feb 20 15:19:50.751954 master-0 kubenswrapper[28120]: I0220 15:19:50.751842 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7518172659999998 podStartE2EDuration="2.751817266s" podCreationTimestamp="2026-02-20 15:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:19:50.732764141 +0000 UTC m=+1128.993557754" watchObservedRunningTime="2026-02-20 15:19:50.751817266 +0000 UTC m=+1129.012610839"
Feb 20 15:19:50.790546 master-0 kubenswrapper[28120]: I0220 15:19:50.790336 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.790316945 podStartE2EDuration="2.790316945s" podCreationTimestamp="2026-02-20 15:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:19:50.769328632 +0000 UTC m=+1129.030122215" watchObservedRunningTime="2026-02-20 15:19:50.790316945 +0000 UTC m=+1129.051110508"
Feb 20 15:19:50.808609 master-0 kubenswrapper[28120]: I0220 15:19:50.808544 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 20 15:19:50.821063 master-0 kubenswrapper[28120]: I0220 15:19:50.820801 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 20 15:19:50.838410 master-0 kubenswrapper[28120]: I0220 15:19:50.838092 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 20 15:19:50.839586 master-0 kubenswrapper[28120]: E0220 15:19:50.839552 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" containerName="nova-api-api"
Feb 20 15:19:50.839586 master-0 kubenswrapper[28120]: I0220 15:19:50.839573 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" containerName="nova-api-api"
Feb 20 15:19:50.839586 master-0 kubenswrapper[28120]: E0220 15:19:50.839588 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" containerName="nova-api-log"
Feb 20 15:19:50.839740 master-0 kubenswrapper[28120]: I0220 15:19:50.839595 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" containerName="nova-api-log"
Feb 20 15:19:50.839818 master-0 kubenswrapper[28120]: I0220 15:19:50.839800 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" containerName="nova-api-api"
Feb 20 15:19:50.839899 master-0 kubenswrapper[28120]: I0220 15:19:50.839824 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" containerName="nova-api-log"
Feb 20 15:19:50.841113 master-0 kubenswrapper[28120]: I0220 15:19:50.841089 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 20 15:19:50.846458 master-0 kubenswrapper[28120]: I0220 15:19:50.846415 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 20 15:19:50.867372 master-0 kubenswrapper[28120]: I0220 15:19:50.861490 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 20 15:19:50.941117 master-0 kubenswrapper[28120]: I0220 15:19:50.940990 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650e94ea-5377-466e-8edb-b56da3270273-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " pod="openstack/nova-api-0"
Feb 20 15:19:50.941288 master-0 kubenswrapper[28120]: I0220 15:19:50.941135 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/650e94ea-5377-466e-8edb-b56da3270273-logs\") pod \"nova-api-0\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " pod="openstack/nova-api-0"
Feb 20 15:19:50.941288 master-0 kubenswrapper[28120]: I0220 15:19:50.941168 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650e94ea-5377-466e-8edb-b56da3270273-config-data\") pod \"nova-api-0\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " pod="openstack/nova-api-0"
Feb 20 15:19:50.941384 master-0 kubenswrapper[28120]: I0220 15:19:50.941264 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjppc\" (UniqueName: \"kubernetes.io/projected/650e94ea-5377-466e-8edb-b56da3270273-kube-api-access-cjppc\") pod \"nova-api-0\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " pod="openstack/nova-api-0"
Feb 20 15:19:51.043619 master-0 kubenswrapper[28120]: I0220 15:19:51.043531 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjppc\" (UniqueName: \"kubernetes.io/projected/650e94ea-5377-466e-8edb-b56da3270273-kube-api-access-cjppc\") pod \"nova-api-0\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " pod="openstack/nova-api-0"
Feb 20 15:19:51.043774 master-0 kubenswrapper[28120]: I0220 15:19:51.043713 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650e94ea-5377-466e-8edb-b56da3270273-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " pod="openstack/nova-api-0"
Feb 20 15:19:51.043896 master-0 kubenswrapper[28120]: I0220 15:19:51.043794 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/650e94ea-5377-466e-8edb-b56da3270273-logs\") pod \"nova-api-0\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " pod="openstack/nova-api-0"
Feb 20 15:19:51.043896 master-0 kubenswrapper[28120]: I0220 15:19:51.043821 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650e94ea-5377-466e-8edb-b56da3270273-config-data\") pod \"nova-api-0\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " pod="openstack/nova-api-0"
Feb 20 15:19:51.044971 master-0 kubenswrapper[28120]: I0220 15:19:51.044895 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/650e94ea-5377-466e-8edb-b56da3270273-logs\") pod \"nova-api-0\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " pod="openstack/nova-api-0"
Feb 20 15:19:51.048790 master-0 kubenswrapper[28120]: I0220 15:19:51.048731 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650e94ea-5377-466e-8edb-b56da3270273-config-data\") pod \"nova-api-0\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " pod="openstack/nova-api-0"
Feb 20 15:19:51.050076 master-0 kubenswrapper[28120]: I0220 15:19:51.050022 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650e94ea-5377-466e-8edb-b56da3270273-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " pod="openstack/nova-api-0"
Feb 20 15:19:51.071326 master-0 kubenswrapper[28120]: I0220 15:19:51.071251 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjppc\" (UniqueName: \"kubernetes.io/projected/650e94ea-5377-466e-8edb-b56da3270273-kube-api-access-cjppc\") pod \"nova-api-0\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " pod="openstack/nova-api-0"
Feb 20 15:19:51.165823 master-0 kubenswrapper[28120]: I0220 15:19:51.165750 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 20 15:19:51.721064 master-0 kubenswrapper[28120]: I0220 15:19:51.720492 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 20 15:19:52.073709 master-0 kubenswrapper[28120]: I0220 15:19:52.073636 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="488fe4b5-a9c6-4373-b370-c9d253d13bf9" path="/var/lib/kubelet/pods/488fe4b5-a9c6-4373-b370-c9d253d13bf9/volumes"
Feb 20 15:19:52.743390 master-0 kubenswrapper[28120]: I0220 15:19:52.743285 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"650e94ea-5377-466e-8edb-b56da3270273","Type":"ContainerStarted","Data":"b7f49e587c42c065b05fb5c760110dfa3293fb8fbd02d01f8149a490f7389523"}
Feb 20 15:19:52.743390 master-0 kubenswrapper[28120]: I0220 15:19:52.743371 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"650e94ea-5377-466e-8edb-b56da3270273","Type":"ContainerStarted","Data":"88902a2caa6f2334db31bfc9411b4e3db96e9598f6333e615c14498ab1f0bb90"}
Feb 20 15:19:52.744505 master-0 kubenswrapper[28120]: I0220 15:19:52.743394 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"650e94ea-5377-466e-8edb-b56da3270273","Type":"ContainerStarted","Data":"c28ab7f5e246ada5ab6dd45d63013c35b2d0550016e7c16193fc5ce1d20227f6"}
Feb 20 15:19:52.791551 master-0 kubenswrapper[28120]: I0220 15:19:52.791305 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.791279357 podStartE2EDuration="2.791279357s" podCreationTimestamp="2026-02-20 15:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:19:52.770904829 +0000 UTC m=+1131.031698482" watchObservedRunningTime="2026-02-20 15:19:52.791279357 +0000 UTC m=+1131.052072950"
Feb 20 15:19:53.145072 master-0 kubenswrapper[28120]: I0220 15:19:53.144821 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 20 15:19:54.127857 master-0 kubenswrapper[28120]: I0220 15:19:54.127788 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 20 15:19:54.461523 master-0 kubenswrapper[28120]: I0220 15:19:54.461427 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 20 15:19:54.461839 master-0 kubenswrapper[28120]: I0220 15:19:54.461639 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 20 15:19:59.128198 master-0 kubenswrapper[28120]: I0220 15:19:59.128101 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 20 15:19:59.192901 master-0 kubenswrapper[28120]: I0220 15:19:59.192826 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 20 15:19:59.461120 master-0 kubenswrapper[28120]: I0220 15:19:59.460843 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 20 15:19:59.461120 master-0 kubenswrapper[28120]: I0220 15:19:59.460998 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 20 15:19:59.907947 master-0 kubenswrapper[28120]: I0220 15:19:59.907773 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 20 15:20:00.484950 master-0 kubenswrapper[28120]: I0220 15:20:00.481223 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c45a9112-7488-415a-8a09-70d1af190834" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.15:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 15:20:00.484950 master-0 kubenswrapper[28120]: I0220 15:20:00.481573 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c45a9112-7488-415a-8a09-70d1af190834" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.15:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 20 15:20:01.167793 master-0 kubenswrapper[28120]: I0220 15:20:01.167717 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 20 15:20:01.168094 master-0 kubenswrapper[28120]: I0220 15:20:01.167808 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 20 15:20:02.209331 master-0 kubenswrapper[28120]: I0220 15:20:02.209217 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="650e94ea-5377-466e-8edb-b56da3270273" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.16:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 20 15:20:02.209952 master-0 kubenswrapper[28120]: I0220 15:20:02.209242 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="650e94ea-5377-466e-8edb-b56da3270273" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.16:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 20 15:20:06.929504 master-0 kubenswrapper[28120]: I0220 15:20:06.929437 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 20 15:20:06.970351 master-0 kubenswrapper[28120]: I0220 15:20:06.970275 28120 generic.go:334] "Generic (PLEG): container finished" podID="a8ef7eb7-14c0-4364-a011-a9c22ea465a0" containerID="0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1" exitCode=137
Feb 20 15:20:06.970351 master-0 kubenswrapper[28120]: I0220 15:20:06.970339 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a8ef7eb7-14c0-4364-a011-a9c22ea465a0","Type":"ContainerDied","Data":"0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1"}
Feb 20 15:20:06.970650 master-0 kubenswrapper[28120]: I0220 15:20:06.970371 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a8ef7eb7-14c0-4364-a011-a9c22ea465a0","Type":"ContainerDied","Data":"da14ca86de91fa875a9b8e4994c92f98c46e2a4a0ac1d19be01f4fc57d9be1de"}
Feb 20 15:20:06.970650 master-0 kubenswrapper[28120]: I0220 15:20:06.970392 28120 scope.go:117] "RemoveContainer" containerID="0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1"
Feb 20 15:20:06.970650 master-0 kubenswrapper[28120]: I0220 15:20:06.970535 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 20 15:20:06.999427 master-0 kubenswrapper[28120]: I0220 15:20:06.999334 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-config-data\") pod \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\" (UID: \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\") "
Feb 20 15:20:06.999641 master-0 kubenswrapper[28120]: I0220 15:20:06.999436 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-combined-ca-bundle\") pod \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\" (UID: \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\") "
Feb 20 15:20:06.999698 master-0 kubenswrapper[28120]: I0220 15:20:06.999643 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm5k4\" (UniqueName: \"kubernetes.io/projected/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-kube-api-access-fm5k4\") pod \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\" (UID: \"a8ef7eb7-14c0-4364-a011-a9c22ea465a0\") "
Feb 20 15:20:07.005893 master-0 kubenswrapper[28120]: I0220 15:20:07.005817 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-kube-api-access-fm5k4" (OuterVolumeSpecName: "kube-api-access-fm5k4") pod "a8ef7eb7-14c0-4364-a011-a9c22ea465a0" (UID: "a8ef7eb7-14c0-4364-a011-a9c22ea465a0"). InnerVolumeSpecName "kube-api-access-fm5k4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:20:07.014862 master-0 kubenswrapper[28120]: I0220 15:20:07.014791 28120 scope.go:117] "RemoveContainer" containerID="0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1"
Feb 20 15:20:07.024187 master-0 kubenswrapper[28120]: E0220 15:20:07.024080 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1\": container with ID starting with 0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1 not found: ID does not exist" containerID="0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1"
Feb 20 15:20:07.024332 master-0 kubenswrapper[28120]: I0220 15:20:07.024185 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1"} err="failed to get container status \"0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1\": rpc error: code = NotFound desc = could not find container \"0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1\": container with ID starting with 0d7422736568bf586d93c97a0b4a298caeec7f1089717570c8c52980655fd9f1 not found: ID does not exist"
Feb 20 15:20:07.033599 master-0 kubenswrapper[28120]: I0220 15:20:07.033535 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8ef7eb7-14c0-4364-a011-a9c22ea465a0" (UID: "a8ef7eb7-14c0-4364-a011-a9c22ea465a0"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:20:07.047296 master-0 kubenswrapper[28120]: I0220 15:20:07.047247 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-config-data" (OuterVolumeSpecName: "config-data") pod "a8ef7eb7-14c0-4364-a011-a9c22ea465a0" (UID: "a8ef7eb7-14c0-4364-a011-a9c22ea465a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:20:07.105943 master-0 kubenswrapper[28120]: I0220 15:20:07.105791 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm5k4\" (UniqueName: \"kubernetes.io/projected/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-kube-api-access-fm5k4\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:07.106148 master-0 kubenswrapper[28120]: I0220 15:20:07.105917 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:07.106148 master-0 kubenswrapper[28120]: I0220 15:20:07.105995 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ef7eb7-14c0-4364-a011-a9c22ea465a0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:07.314265 master-0 kubenswrapper[28120]: I0220 15:20:07.314213 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 15:20:07.332003 master-0 kubenswrapper[28120]: I0220 15:20:07.330955 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 15:20:07.387231 master-0 kubenswrapper[28120]: I0220 15:20:07.387090 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 15:20:07.392072 master-0 kubenswrapper[28120]: E0220 15:20:07.389683 28120 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a8ef7eb7-14c0-4364-a011-a9c22ea465a0" containerName="nova-cell1-novncproxy-novncproxy" Feb 20 15:20:07.392072 master-0 kubenswrapper[28120]: I0220 15:20:07.389728 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ef7eb7-14c0-4364-a011-a9c22ea465a0" containerName="nova-cell1-novncproxy-novncproxy" Feb 20 15:20:07.399677 master-0 kubenswrapper[28120]: I0220 15:20:07.399619 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ef7eb7-14c0-4364-a011-a9c22ea465a0" containerName="nova-cell1-novncproxy-novncproxy" Feb 20 15:20:07.400862 master-0 kubenswrapper[28120]: I0220 15:20:07.400836 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.403848 master-0 kubenswrapper[28120]: I0220 15:20:07.403121 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 20 15:20:07.404559 master-0 kubenswrapper[28120]: I0220 15:20:07.404534 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 20 15:20:07.404710 master-0 kubenswrapper[28120]: I0220 15:20:07.404539 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 20 15:20:07.422215 master-0 kubenswrapper[28120]: I0220 15:20:07.417821 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 15:20:07.514794 master-0 kubenswrapper[28120]: I0220 15:20:07.514735 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82a2e13-fe11-495b-a1fb-e8817f65f24d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.515304 master-0 kubenswrapper[28120]: 
I0220 15:20:07.515239 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82a2e13-fe11-495b-a1fb-e8817f65f24d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.515481 master-0 kubenswrapper[28120]: I0220 15:20:07.515426 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82a2e13-fe11-495b-a1fb-e8817f65f24d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.515481 master-0 kubenswrapper[28120]: I0220 15:20:07.515478 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4wrj\" (UniqueName: \"kubernetes.io/projected/c82a2e13-fe11-495b-a1fb-e8817f65f24d-kube-api-access-d4wrj\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.515641 master-0 kubenswrapper[28120]: I0220 15:20:07.515545 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82a2e13-fe11-495b-a1fb-e8817f65f24d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.618549 master-0 kubenswrapper[28120]: I0220 15:20:07.618426 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82a2e13-fe11-495b-a1fb-e8817f65f24d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.618549 master-0 kubenswrapper[28120]: I0220 15:20:07.618559 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82a2e13-fe11-495b-a1fb-e8817f65f24d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.619502 master-0 kubenswrapper[28120]: I0220 15:20:07.618604 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4wrj\" (UniqueName: \"kubernetes.io/projected/c82a2e13-fe11-495b-a1fb-e8817f65f24d-kube-api-access-d4wrj\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.619502 master-0 kubenswrapper[28120]: I0220 15:20:07.618703 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82a2e13-fe11-495b-a1fb-e8817f65f24d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.619502 master-0 kubenswrapper[28120]: I0220 15:20:07.618874 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82a2e13-fe11-495b-a1fb-e8817f65f24d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.623261 master-0 kubenswrapper[28120]: I0220 15:20:07.623110 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82a2e13-fe11-495b-a1fb-e8817f65f24d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.623977 master-0 kubenswrapper[28120]: I0220 15:20:07.623943 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82a2e13-fe11-495b-a1fb-e8817f65f24d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.624544 master-0 kubenswrapper[28120]: I0220 15:20:07.624498 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82a2e13-fe11-495b-a1fb-e8817f65f24d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.626596 master-0 kubenswrapper[28120]: I0220 15:20:07.626554 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82a2e13-fe11-495b-a1fb-e8817f65f24d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.649612 master-0 kubenswrapper[28120]: I0220 15:20:07.649453 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4wrj\" (UniqueName: \"kubernetes.io/projected/c82a2e13-fe11-495b-a1fb-e8817f65f24d-kube-api-access-d4wrj\") pod \"nova-cell1-novncproxy-0\" (UID: \"c82a2e13-fe11-495b-a1fb-e8817f65f24d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:07.754123 master-0 kubenswrapper[28120]: I0220 15:20:07.754068 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:08.073095 master-0 kubenswrapper[28120]: I0220 15:20:08.073023 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8ef7eb7-14c0-4364-a011-a9c22ea465a0" path="/var/lib/kubelet/pods/a8ef7eb7-14c0-4364-a011-a9c22ea465a0/volumes" Feb 20 15:20:08.263022 master-0 kubenswrapper[28120]: I0220 15:20:08.261275 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 20 15:20:08.264718 master-0 kubenswrapper[28120]: W0220 15:20:08.264651 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc82a2e13_fe11_495b_a1fb_e8817f65f24d.slice/crio-97fb5b75207bda444af393b445cb2b7e0293803036688f6f5c8c89a76e2ac527 WatchSource:0}: Error finding container 97fb5b75207bda444af393b445cb2b7e0293803036688f6f5c8c89a76e2ac527: Status 404 returned error can't find the container with id 97fb5b75207bda444af393b445cb2b7e0293803036688f6f5c8c89a76e2ac527 Feb 20 15:20:09.001125 master-0 kubenswrapper[28120]: I0220 15:20:09.001043 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c82a2e13-fe11-495b-a1fb-e8817f65f24d","Type":"ContainerStarted","Data":"35544776bd04915e795b2a09ca02bc96df294d40eab6e593529400b34f3fc106"} Feb 20 15:20:09.001125 master-0 kubenswrapper[28120]: I0220 15:20:09.001116 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c82a2e13-fe11-495b-a1fb-e8817f65f24d","Type":"ContainerStarted","Data":"97fb5b75207bda444af393b445cb2b7e0293803036688f6f5c8c89a76e2ac527"} Feb 20 15:20:09.026152 master-0 kubenswrapper[28120]: I0220 15:20:09.025975 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.02594781 podStartE2EDuration="2.02594781s" podCreationTimestamp="2026-02-20 
15:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:20:09.022062163 +0000 UTC m=+1147.282855766" watchObservedRunningTime="2026-02-20 15:20:09.02594781 +0000 UTC m=+1147.286741383" Feb 20 15:20:09.467498 master-0 kubenswrapper[28120]: I0220 15:20:09.467405 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 20 15:20:09.471673 master-0 kubenswrapper[28120]: I0220 15:20:09.471600 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 20 15:20:09.494210 master-0 kubenswrapper[28120]: I0220 15:20:09.494124 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 20 15:20:10.024484 master-0 kubenswrapper[28120]: I0220 15:20:10.024404 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 20 15:20:11.172795 master-0 kubenswrapper[28120]: I0220 15:20:11.172704 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 20 15:20:11.173434 master-0 kubenswrapper[28120]: I0220 15:20:11.173048 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 20 15:20:11.173948 master-0 kubenswrapper[28120]: I0220 15:20:11.173891 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 20 15:20:11.174041 master-0 kubenswrapper[28120]: I0220 15:20:11.173983 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 20 15:20:11.178094 master-0 kubenswrapper[28120]: I0220 15:20:11.178047 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 20 15:20:11.180258 master-0 kubenswrapper[28120]: I0220 15:20:11.180235 28120 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 20 15:20:11.477665 master-0 kubenswrapper[28120]: I0220 15:20:11.477595 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8f95c8447-2k7cv"] Feb 20 15:20:11.501115 master-0 kubenswrapper[28120]: I0220 15:20:11.501015 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.512063 master-0 kubenswrapper[28120]: I0220 15:20:11.512007 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8f95c8447-2k7cv"] Feb 20 15:20:11.548650 master-0 kubenswrapper[28120]: I0220 15:20:11.548588 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-dns-swift-storage-0\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.548847 master-0 kubenswrapper[28120]: I0220 15:20:11.548657 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtk4\" (UniqueName: \"kubernetes.io/projected/85e63473-9702-4e57-b751-ef5db82cdbf4-kube-api-access-sjtk4\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.548890 master-0 kubenswrapper[28120]: I0220 15:20:11.548846 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-ovsdbserver-sb\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.548890 master-0 kubenswrapper[28120]: I0220 15:20:11.548878 28120 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-ovsdbserver-nb\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.549006 master-0 kubenswrapper[28120]: I0220 15:20:11.548901 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-dns-svc\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.549006 master-0 kubenswrapper[28120]: I0220 15:20:11.548962 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-config\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.651083 master-0 kubenswrapper[28120]: I0220 15:20:11.651034 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-config\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.651368 master-0 kubenswrapper[28120]: I0220 15:20:11.651317 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-dns-swift-storage-0\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.651476 master-0 kubenswrapper[28120]: 
I0220 15:20:11.651451 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjtk4\" (UniqueName: \"kubernetes.io/projected/85e63473-9702-4e57-b751-ef5db82cdbf4-kube-api-access-sjtk4\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.652127 master-0 kubenswrapper[28120]: I0220 15:20:11.652097 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-ovsdbserver-sb\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.652231 master-0 kubenswrapper[28120]: I0220 15:20:11.652204 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-ovsdbserver-nb\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.652289 master-0 kubenswrapper[28120]: I0220 15:20:11.652268 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-dns-svc\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.652415 master-0 kubenswrapper[28120]: I0220 15:20:11.652382 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-dns-swift-storage-0\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.652507 master-0 
kubenswrapper[28120]: I0220 15:20:11.652468 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-config\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.653367 master-0 kubenswrapper[28120]: I0220 15:20:11.653347 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-ovsdbserver-nb\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.653875 master-0 kubenswrapper[28120]: I0220 15:20:11.653845 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-dns-svc\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.655534 master-0 kubenswrapper[28120]: I0220 15:20:11.655503 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85e63473-9702-4e57-b751-ef5db82cdbf4-ovsdbserver-sb\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.671176 master-0 kubenswrapper[28120]: I0220 15:20:11.671133 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjtk4\" (UniqueName: \"kubernetes.io/projected/85e63473-9702-4e57-b751-ef5db82cdbf4-kube-api-access-sjtk4\") pod \"dnsmasq-dns-8f95c8447-2k7cv\" (UID: \"85e63473-9702-4e57-b751-ef5db82cdbf4\") " pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:11.848221 master-0 kubenswrapper[28120]: I0220 15:20:11.848147 28120 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:12.346170 master-0 kubenswrapper[28120]: W0220 15:20:12.345171 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85e63473_9702_4e57_b751_ef5db82cdbf4.slice/crio-2def3ac06b3186aaa38d96940a56c5e56abebb6ef21086ab7b5a625231ced0cd WatchSource:0}: Error finding container 2def3ac06b3186aaa38d96940a56c5e56abebb6ef21086ab7b5a625231ced0cd: Status 404 returned error can't find the container with id 2def3ac06b3186aaa38d96940a56c5e56abebb6ef21086ab7b5a625231ced0cd Feb 20 15:20:12.355879 master-0 kubenswrapper[28120]: I0220 15:20:12.355779 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8f95c8447-2k7cv"] Feb 20 15:20:12.756109 master-0 kubenswrapper[28120]: I0220 15:20:12.755071 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:13.053962 master-0 kubenswrapper[28120]: I0220 15:20:13.053894 28120 generic.go:334] "Generic (PLEG): container finished" podID="85e63473-9702-4e57-b751-ef5db82cdbf4" containerID="2eaa6a80e6fafa261e44aa13627173fff79e1cd354504346e2c605bdac8e5a9e" exitCode=0 Feb 20 15:20:13.054188 master-0 kubenswrapper[28120]: I0220 15:20:13.053967 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" event={"ID":"85e63473-9702-4e57-b751-ef5db82cdbf4","Type":"ContainerDied","Data":"2eaa6a80e6fafa261e44aa13627173fff79e1cd354504346e2c605bdac8e5a9e"} Feb 20 15:20:13.054188 master-0 kubenswrapper[28120]: I0220 15:20:13.054081 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" event={"ID":"85e63473-9702-4e57-b751-ef5db82cdbf4","Type":"ContainerStarted","Data":"2def3ac06b3186aaa38d96940a56c5e56abebb6ef21086ab7b5a625231ced0cd"} Feb 20 15:20:14.084135 master-0 
kubenswrapper[28120]: I0220 15:20:14.084034 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" event={"ID":"85e63473-9702-4e57-b751-ef5db82cdbf4","Type":"ContainerStarted","Data":"a9f5b7129fb0324fff67dfefc1c8516e64f2d8f58069686bc211a17bdfef47eb"} Feb 20 15:20:14.084135 master-0 kubenswrapper[28120]: I0220 15:20:14.084112 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:14.125359 master-0 kubenswrapper[28120]: I0220 15:20:14.125230 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" podStartSLOduration=3.12520321 podStartE2EDuration="3.12520321s" podCreationTimestamp="2026-02-20 15:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:20:14.110645438 +0000 UTC m=+1152.371439031" watchObservedRunningTime="2026-02-20 15:20:14.12520321 +0000 UTC m=+1152.385996783" Feb 20 15:20:14.392509 master-0 kubenswrapper[28120]: I0220 15:20:14.392312 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 20 15:20:14.392879 master-0 kubenswrapper[28120]: I0220 15:20:14.392603 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="650e94ea-5377-466e-8edb-b56da3270273" containerName="nova-api-log" containerID="cri-o://88902a2caa6f2334db31bfc9411b4e3db96e9598f6333e615c14498ab1f0bb90" gracePeriod=30 Feb 20 15:20:14.392879 master-0 kubenswrapper[28120]: I0220 15:20:14.392718 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="650e94ea-5377-466e-8edb-b56da3270273" containerName="nova-api-api" containerID="cri-o://b7f49e587c42c065b05fb5c760110dfa3293fb8fbd02d01f8149a490f7389523" gracePeriod=30 Feb 20 15:20:15.093387 master-0 kubenswrapper[28120]: I0220 
15:20:15.093226 28120 generic.go:334] "Generic (PLEG): container finished" podID="650e94ea-5377-466e-8edb-b56da3270273" containerID="88902a2caa6f2334db31bfc9411b4e3db96e9598f6333e615c14498ab1f0bb90" exitCode=143 Feb 20 15:20:15.094004 master-0 kubenswrapper[28120]: I0220 15:20:15.093583 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"650e94ea-5377-466e-8edb-b56da3270273","Type":"ContainerDied","Data":"88902a2caa6f2334db31bfc9411b4e3db96e9598f6333e615c14498ab1f0bb90"} Feb 20 15:20:17.755204 master-0 kubenswrapper[28120]: I0220 15:20:17.755109 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:17.790574 master-0 kubenswrapper[28120]: I0220 15:20:17.789289 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:18.140456 master-0 kubenswrapper[28120]: I0220 15:20:18.140389 28120 generic.go:334] "Generic (PLEG): container finished" podID="650e94ea-5377-466e-8edb-b56da3270273" containerID="b7f49e587c42c065b05fb5c760110dfa3293fb8fbd02d01f8149a490f7389523" exitCode=0 Feb 20 15:20:18.140705 master-0 kubenswrapper[28120]: I0220 15:20:18.140486 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"650e94ea-5377-466e-8edb-b56da3270273","Type":"ContainerDied","Data":"b7f49e587c42c065b05fb5c760110dfa3293fb8fbd02d01f8149a490f7389523"} Feb 20 15:20:18.140705 master-0 kubenswrapper[28120]: I0220 15:20:18.140561 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"650e94ea-5377-466e-8edb-b56da3270273","Type":"ContainerDied","Data":"c28ab7f5e246ada5ab6dd45d63013c35b2d0550016e7c16193fc5ce1d20227f6"} Feb 20 15:20:18.140705 master-0 kubenswrapper[28120]: I0220 15:20:18.140627 28120 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c28ab7f5e246ada5ab6dd45d63013c35b2d0550016e7c16193fc5ce1d20227f6" Feb 20 15:20:18.170029 master-0 kubenswrapper[28120]: I0220 15:20:18.169967 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 20 15:20:18.262422 master-0 kubenswrapper[28120]: I0220 15:20:18.262326 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 20 15:20:18.352186 master-0 kubenswrapper[28120]: I0220 15:20:18.350990 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjppc\" (UniqueName: \"kubernetes.io/projected/650e94ea-5377-466e-8edb-b56da3270273-kube-api-access-cjppc\") pod \"650e94ea-5377-466e-8edb-b56da3270273\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " Feb 20 15:20:18.352186 master-0 kubenswrapper[28120]: I0220 15:20:18.351302 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/650e94ea-5377-466e-8edb-b56da3270273-logs\") pod \"650e94ea-5377-466e-8edb-b56da3270273\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " Feb 20 15:20:18.352186 master-0 kubenswrapper[28120]: I0220 15:20:18.351361 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650e94ea-5377-466e-8edb-b56da3270273-config-data\") pod \"650e94ea-5377-466e-8edb-b56da3270273\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " Feb 20 15:20:18.352186 master-0 kubenswrapper[28120]: I0220 15:20:18.351438 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650e94ea-5377-466e-8edb-b56da3270273-combined-ca-bundle\") pod \"650e94ea-5377-466e-8edb-b56da3270273\" (UID: \"650e94ea-5377-466e-8edb-b56da3270273\") " Feb 20 15:20:18.352186 master-0 kubenswrapper[28120]: I0220 15:20:18.351807 28120 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/650e94ea-5377-466e-8edb-b56da3270273-logs" (OuterVolumeSpecName: "logs") pod "650e94ea-5377-466e-8edb-b56da3270273" (UID: "650e94ea-5377-466e-8edb-b56da3270273"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:20:18.352874 master-0 kubenswrapper[28120]: I0220 15:20:18.352247 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/650e94ea-5377-466e-8edb-b56da3270273-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:18.359056 master-0 kubenswrapper[28120]: I0220 15:20:18.355460 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650e94ea-5377-466e-8edb-b56da3270273-kube-api-access-cjppc" (OuterVolumeSpecName: "kube-api-access-cjppc") pod "650e94ea-5377-466e-8edb-b56da3270273" (UID: "650e94ea-5377-466e-8edb-b56da3270273"). InnerVolumeSpecName "kube-api-access-cjppc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:20:18.381356 master-0 kubenswrapper[28120]: I0220 15:20:18.381302 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650e94ea-5377-466e-8edb-b56da3270273-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "650e94ea-5377-466e-8edb-b56da3270273" (UID: "650e94ea-5377-466e-8edb-b56da3270273"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:20:18.398169 master-0 kubenswrapper[28120]: I0220 15:20:18.398114 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650e94ea-5377-466e-8edb-b56da3270273-config-data" (OuterVolumeSpecName: "config-data") pod "650e94ea-5377-466e-8edb-b56da3270273" (UID: "650e94ea-5377-466e-8edb-b56da3270273"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:20:18.460519 master-0 kubenswrapper[28120]: I0220 15:20:18.460482 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650e94ea-5377-466e-8edb-b56da3270273-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:18.460752 master-0 kubenswrapper[28120]: I0220 15:20:18.460740 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650e94ea-5377-466e-8edb-b56da3270273-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:18.460827 master-0 kubenswrapper[28120]: I0220 15:20:18.460815 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjppc\" (UniqueName: \"kubernetes.io/projected/650e94ea-5377-466e-8edb-b56da3270273-kube-api-access-cjppc\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:18.569225 master-0 kubenswrapper[28120]: I0220 15:20:18.566798 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lcrwp"] Feb 20 15:20:18.569225 master-0 kubenswrapper[28120]: E0220 15:20:18.567573 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650e94ea-5377-466e-8edb-b56da3270273" containerName="nova-api-api" Feb 20 15:20:18.569225 master-0 kubenswrapper[28120]: I0220 15:20:18.567594 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="650e94ea-5377-466e-8edb-b56da3270273" containerName="nova-api-api" Feb 20 15:20:18.569225 master-0 kubenswrapper[28120]: E0220 15:20:18.567614 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650e94ea-5377-466e-8edb-b56da3270273" containerName="nova-api-log" Feb 20 15:20:18.569225 master-0 kubenswrapper[28120]: I0220 15:20:18.567623 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="650e94ea-5377-466e-8edb-b56da3270273" containerName="nova-api-log" Feb 20 15:20:18.569225 master-0 kubenswrapper[28120]: I0220 
15:20:18.568016 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="650e94ea-5377-466e-8edb-b56da3270273" containerName="nova-api-api" Feb 20 15:20:18.569225 master-0 kubenswrapper[28120]: I0220 15:20:18.568053 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="650e94ea-5377-466e-8edb-b56da3270273" containerName="nova-api-log" Feb 20 15:20:18.571600 master-0 kubenswrapper[28120]: I0220 15:20:18.570465 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.573187 master-0 kubenswrapper[28120]: I0220 15:20:18.572993 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 20 15:20:18.576556 master-0 kubenswrapper[28120]: I0220 15:20:18.574902 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 20 15:20:18.585440 master-0 kubenswrapper[28120]: I0220 15:20:18.585380 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-ttdlx"] Feb 20 15:20:18.587670 master-0 kubenswrapper[28120]: I0220 15:20:18.587631 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.601381 master-0 kubenswrapper[28120]: I0220 15:20:18.601322 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lcrwp"] Feb 20 15:20:18.622357 master-0 kubenswrapper[28120]: I0220 15:20:18.622298 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-ttdlx"] Feb 20 15:20:18.665507 master-0 kubenswrapper[28120]: I0220 15:20:18.665433 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-combined-ca-bundle\") pod \"nova-cell1-host-discover-ttdlx\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") " pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.665708 master-0 kubenswrapper[28120]: I0220 15:20:18.665548 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-scripts\") pod \"nova-cell1-host-discover-ttdlx\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") " pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.665708 master-0 kubenswrapper[28120]: I0220 15:20:18.665587 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn6j9\" (UniqueName: \"kubernetes.io/projected/c66fd46e-c999-4b3d-8430-3322dd85068c-kube-api-access-bn6j9\") pod \"nova-cell1-host-discover-ttdlx\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") " pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.665708 master-0 kubenswrapper[28120]: I0220 15:20:18.665662 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b59x\" (UniqueName: 
\"kubernetes.io/projected/77c2fabe-8695-4a34-b116-f790f5b49b46-kube-api-access-7b59x\") pod \"nova-cell1-cell-mapping-lcrwp\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") " pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.665812 master-0 kubenswrapper[28120]: I0220 15:20:18.665722 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lcrwp\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") " pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.665812 master-0 kubenswrapper[28120]: I0220 15:20:18.665793 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-config-data\") pod \"nova-cell1-cell-mapping-lcrwp\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") " pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.665985 master-0 kubenswrapper[28120]: I0220 15:20:18.665855 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-scripts\") pod \"nova-cell1-cell-mapping-lcrwp\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") " pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.665985 master-0 kubenswrapper[28120]: I0220 15:20:18.665877 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-config-data\") pod \"nova-cell1-host-discover-ttdlx\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") " pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.769816 master-0 kubenswrapper[28120]: I0220 15:20:18.768236 28120 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lcrwp\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") " pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.769816 master-0 kubenswrapper[28120]: I0220 15:20:18.768386 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-config-data\") pod \"nova-cell1-cell-mapping-lcrwp\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") " pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.769816 master-0 kubenswrapper[28120]: I0220 15:20:18.768518 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-scripts\") pod \"nova-cell1-cell-mapping-lcrwp\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") " pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.769816 master-0 kubenswrapper[28120]: I0220 15:20:18.768559 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-config-data\") pod \"nova-cell1-host-discover-ttdlx\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") " pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.769816 master-0 kubenswrapper[28120]: I0220 15:20:18.768628 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-combined-ca-bundle\") pod \"nova-cell1-host-discover-ttdlx\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") " pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.769816 master-0 kubenswrapper[28120]: I0220 15:20:18.768712 28120 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-scripts\") pod \"nova-cell1-host-discover-ttdlx\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") " pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.769816 master-0 kubenswrapper[28120]: I0220 15:20:18.768748 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn6j9\" (UniqueName: \"kubernetes.io/projected/c66fd46e-c999-4b3d-8430-3322dd85068c-kube-api-access-bn6j9\") pod \"nova-cell1-host-discover-ttdlx\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") " pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.769816 master-0 kubenswrapper[28120]: I0220 15:20:18.768858 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b59x\" (UniqueName: \"kubernetes.io/projected/77c2fabe-8695-4a34-b116-f790f5b49b46-kube-api-access-7b59x\") pod \"nova-cell1-cell-mapping-lcrwp\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") " pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.773031 master-0 kubenswrapper[28120]: I0220 15:20:18.772990 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lcrwp\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") " pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.773592 master-0 kubenswrapper[28120]: I0220 15:20:18.773549 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-combined-ca-bundle\") pod \"nova-cell1-host-discover-ttdlx\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") " pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.774471 master-0 
kubenswrapper[28120]: I0220 15:20:18.773733 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-config-data\") pod \"nova-cell1-host-discover-ttdlx\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") " pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.774471 master-0 kubenswrapper[28120]: I0220 15:20:18.773699 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-config-data\") pod \"nova-cell1-cell-mapping-lcrwp\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") " pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.774605 master-0 kubenswrapper[28120]: I0220 15:20:18.774568 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-scripts\") pod \"nova-cell1-host-discover-ttdlx\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") " pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.775081 master-0 kubenswrapper[28120]: I0220 15:20:18.775043 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-scripts\") pod \"nova-cell1-cell-mapping-lcrwp\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") " pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.791859 master-0 kubenswrapper[28120]: I0220 15:20:18.791818 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn6j9\" (UniqueName: \"kubernetes.io/projected/c66fd46e-c999-4b3d-8430-3322dd85068c-kube-api-access-bn6j9\") pod \"nova-cell1-host-discover-ttdlx\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") " pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:18.793205 master-0 kubenswrapper[28120]: I0220 15:20:18.793169 
28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b59x\" (UniqueName: \"kubernetes.io/projected/77c2fabe-8695-4a34-b116-f790f5b49b46-kube-api-access-7b59x\") pod \"nova-cell1-cell-mapping-lcrwp\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") " pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.913626 master-0 kubenswrapper[28120]: I0220 15:20:18.913494 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lcrwp" Feb 20 15:20:18.925307 master-0 kubenswrapper[28120]: I0220 15:20:18.925230 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-ttdlx" Feb 20 15:20:19.161564 master-0 kubenswrapper[28120]: I0220 15:20:19.161305 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 20 15:20:19.232858 master-0 kubenswrapper[28120]: I0220 15:20:19.232802 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 20 15:20:19.249334 master-0 kubenswrapper[28120]: I0220 15:20:19.246662 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 20 15:20:19.262484 master-0 kubenswrapper[28120]: I0220 15:20:19.262410 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 20 15:20:19.270311 master-0 kubenswrapper[28120]: I0220 15:20:19.270216 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 20 15:20:19.272959 master-0 kubenswrapper[28120]: I0220 15:20:19.272906 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 20 15:20:19.273264 master-0 kubenswrapper[28120]: I0220 15:20:19.273212 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 20 15:20:19.274789 master-0 kubenswrapper[28120]: I0220 15:20:19.274709 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 20 15:20:19.276763 master-0 kubenswrapper[28120]: I0220 15:20:19.276694 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 20 15:20:19.386804 master-0 kubenswrapper[28120]: I0220 15:20:19.386750 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.386804 master-0 kubenswrapper[28120]: I0220 15:20:19.386819 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-config-data\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.387083 master-0 kubenswrapper[28120]: I0220 15:20:19.386908 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.387149 master-0 kubenswrapper[28120]: I0220 15:20:19.387111 28120 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-public-tls-certs\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.387301 master-0 kubenswrapper[28120]: I0220 15:20:19.387272 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6e5b14e-8b50-4436-86cd-959de104c5d7-logs\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.387361 master-0 kubenswrapper[28120]: I0220 15:20:19.387344 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdmjq\" (UniqueName: \"kubernetes.io/projected/e6e5b14e-8b50-4436-86cd-959de104c5d7-kube-api-access-sdmjq\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.489121 master-0 kubenswrapper[28120]: I0220 15:20:19.489070 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-public-tls-certs\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.489312 master-0 kubenswrapper[28120]: I0220 15:20:19.489188 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6e5b14e-8b50-4436-86cd-959de104c5d7-logs\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.489312 master-0 kubenswrapper[28120]: I0220 15:20:19.489232 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdmjq\" (UniqueName: 
\"kubernetes.io/projected/e6e5b14e-8b50-4436-86cd-959de104c5d7-kube-api-access-sdmjq\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.489433 master-0 kubenswrapper[28120]: I0220 15:20:19.489401 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.489472 master-0 kubenswrapper[28120]: I0220 15:20:19.489459 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-config-data\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.489535 master-0 kubenswrapper[28120]: I0220 15:20:19.489520 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.490135 master-0 kubenswrapper[28120]: I0220 15:20:19.490093 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6e5b14e-8b50-4436-86cd-959de104c5d7-logs\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.492535 master-0 kubenswrapper[28120]: I0220 15:20:19.492501 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 
15:20:19.494720 master-0 kubenswrapper[28120]: I0220 15:20:19.494655 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.495996 master-0 kubenswrapper[28120]: I0220 15:20:19.495959 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-config-data\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.496702 master-0 kubenswrapper[28120]: I0220 15:20:19.496652 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-public-tls-certs\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.506170 master-0 kubenswrapper[28120]: I0220 15:20:19.506125 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdmjq\" (UniqueName: \"kubernetes.io/projected/e6e5b14e-8b50-4436-86cd-959de104c5d7-kube-api-access-sdmjq\") pod \"nova-api-0\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") " pod="openstack/nova-api-0" Feb 20 15:20:19.562444 master-0 kubenswrapper[28120]: I0220 15:20:19.562379 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-ttdlx"] Feb 20 15:20:19.594561 master-0 kubenswrapper[28120]: I0220 15:20:19.594432 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 20 15:20:19.644060 master-0 kubenswrapper[28120]: I0220 15:20:19.643996 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lcrwp"] Feb 20 15:20:20.095159 master-0 kubenswrapper[28120]: I0220 15:20:20.095123 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650e94ea-5377-466e-8edb-b56da3270273" path="/var/lib/kubelet/pods/650e94ea-5377-466e-8edb-b56da3270273/volumes" Feb 20 15:20:20.097714 master-0 kubenswrapper[28120]: I0220 15:20:20.097654 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 20 15:20:20.099998 master-0 kubenswrapper[28120]: W0220 15:20:20.099411 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6e5b14e_8b50_4436_86cd_959de104c5d7.slice/crio-0be18ea021032ade5628e24302aabb8471090d4bd992f37147da519baabc225d WatchSource:0}: Error finding container 0be18ea021032ade5628e24302aabb8471090d4bd992f37147da519baabc225d: Status 404 returned error can't find the container with id 0be18ea021032ade5628e24302aabb8471090d4bd992f37147da519baabc225d Feb 20 15:20:20.175044 master-0 kubenswrapper[28120]: I0220 15:20:20.174979 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6e5b14e-8b50-4436-86cd-959de104c5d7","Type":"ContainerStarted","Data":"0be18ea021032ade5628e24302aabb8471090d4bd992f37147da519baabc225d"} Feb 20 15:20:20.177139 master-0 kubenswrapper[28120]: I0220 15:20:20.177058 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lcrwp" event={"ID":"77c2fabe-8695-4a34-b116-f790f5b49b46","Type":"ContainerStarted","Data":"d683ac0b48f81b8c74b430dcd53c000e86043b4ca9f4b06c5c3cc473d3002c5c"} Feb 20 15:20:20.177217 master-0 kubenswrapper[28120]: I0220 15:20:20.177154 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-lcrwp" event={"ID":"77c2fabe-8695-4a34-b116-f790f5b49b46","Type":"ContainerStarted","Data":"0af08d70832e469d6242279c9917fcd9cb3b11f4c93bdd962ec7064ccec1dadd"} Feb 20 15:20:20.180129 master-0 kubenswrapper[28120]: I0220 15:20:20.180057 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-ttdlx" event={"ID":"c66fd46e-c999-4b3d-8430-3322dd85068c","Type":"ContainerStarted","Data":"8d7e36d535e0138fbd926c874a5b11583cddabbdab5f295dba04605576720036"} Feb 20 15:20:20.180129 master-0 kubenswrapper[28120]: I0220 15:20:20.180129 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-ttdlx" event={"ID":"c66fd46e-c999-4b3d-8430-3322dd85068c","Type":"ContainerStarted","Data":"b3f74c11e2017cbce5ad5d3135cb4d6d7376f239e81ba26cac6eeab8d84a9e2b"} Feb 20 15:20:20.202648 master-0 kubenswrapper[28120]: I0220 15:20:20.202562 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lcrwp" podStartSLOduration=2.202538616 podStartE2EDuration="2.202538616s" podCreationTimestamp="2026-02-20 15:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:20:20.198354992 +0000 UTC m=+1158.459148565" watchObservedRunningTime="2026-02-20 15:20:20.202538616 +0000 UTC m=+1158.463332189" Feb 20 15:20:20.227769 master-0 kubenswrapper[28120]: I0220 15:20:20.227685 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-ttdlx" podStartSLOduration=2.227664541 podStartE2EDuration="2.227664541s" podCreationTimestamp="2026-02-20 15:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:20:20.221826626 +0000 UTC m=+1158.482620209" watchObservedRunningTime="2026-02-20 
15:20:20.227664541 +0000 UTC m=+1158.488458124" Feb 20 15:20:21.197367 master-0 kubenswrapper[28120]: I0220 15:20:21.197305 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6e5b14e-8b50-4436-86cd-959de104c5d7","Type":"ContainerStarted","Data":"a385192dcfb1674f43208acc27d68c912611600b95daf150ec56beabe76af437"} Feb 20 15:20:21.198100 master-0 kubenswrapper[28120]: I0220 15:20:21.198066 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6e5b14e-8b50-4436-86cd-959de104c5d7","Type":"ContainerStarted","Data":"62e521eda21509d2e889d866086d5be9a009e50de08c98619d8a4962112ac3a7"} Feb 20 15:20:21.850337 master-0 kubenswrapper[28120]: I0220 15:20:21.850258 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8f95c8447-2k7cv" Feb 20 15:20:21.903467 master-0 kubenswrapper[28120]: I0220 15:20:21.903358 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.903330545 podStartE2EDuration="2.903330545s" podCreationTimestamp="2026-02-20 15:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:20:21.222975157 +0000 UTC m=+1159.483768730" watchObservedRunningTime="2026-02-20 15:20:21.903330545 +0000 UTC m=+1160.164124138" Feb 20 15:20:21.976951 master-0 kubenswrapper[28120]: I0220 15:20:21.969575 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78d5d45447-5tn9v"] Feb 20 15:20:21.976951 master-0 kubenswrapper[28120]: I0220 15:20:21.969848 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" podUID="4bf741d6-32da-404c-a508-ddf648ba8b62" containerName="dnsmasq-dns" containerID="cri-o://5afc18c64ceee0305ad458ab244e54e6cdbd0094d674cdb594580aa525b20d2f" gracePeriod=10 Feb 20 
15:20:22.211521 master-0 kubenswrapper[28120]: I0220 15:20:22.211478 28120 generic.go:334] "Generic (PLEG): container finished" podID="4bf741d6-32da-404c-a508-ddf648ba8b62" containerID="5afc18c64ceee0305ad458ab244e54e6cdbd0094d674cdb594580aa525b20d2f" exitCode=0
Feb 20 15:20:22.212090 master-0 kubenswrapper[28120]: I0220 15:20:22.211562 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" event={"ID":"4bf741d6-32da-404c-a508-ddf648ba8b62","Type":"ContainerDied","Data":"5afc18c64ceee0305ad458ab244e54e6cdbd0094d674cdb594580aa525b20d2f"}
Feb 20 15:20:22.213718 master-0 kubenswrapper[28120]: I0220 15:20:22.213673 28120 generic.go:334] "Generic (PLEG): container finished" podID="c66fd46e-c999-4b3d-8430-3322dd85068c" containerID="8d7e36d535e0138fbd926c874a5b11583cddabbdab5f295dba04605576720036" exitCode=0
Feb 20 15:20:22.213944 master-0 kubenswrapper[28120]: I0220 15:20:22.213878 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-ttdlx" event={"ID":"c66fd46e-c999-4b3d-8430-3322dd85068c","Type":"ContainerDied","Data":"8d7e36d535e0138fbd926c874a5b11583cddabbdab5f295dba04605576720036"}
Feb 20 15:20:22.535836 master-0 kubenswrapper[28120]: I0220 15:20:22.535777 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v"
Feb 20 15:20:22.578114 master-0 kubenswrapper[28120]: I0220 15:20:22.577595 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-dns-swift-storage-0\") pod \"4bf741d6-32da-404c-a508-ddf648ba8b62\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") "
Feb 20 15:20:22.578114 master-0 kubenswrapper[28120]: I0220 15:20:22.577734 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-ovsdbserver-sb\") pod \"4bf741d6-32da-404c-a508-ddf648ba8b62\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") "
Feb 20 15:20:22.578114 master-0 kubenswrapper[28120]: I0220 15:20:22.577910 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbfwp\" (UniqueName: \"kubernetes.io/projected/4bf741d6-32da-404c-a508-ddf648ba8b62-kube-api-access-vbfwp\") pod \"4bf741d6-32da-404c-a508-ddf648ba8b62\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") "
Feb 20 15:20:22.578114 master-0 kubenswrapper[28120]: I0220 15:20:22.578025 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-ovsdbserver-nb\") pod \"4bf741d6-32da-404c-a508-ddf648ba8b62\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") "
Feb 20 15:20:22.578114 master-0 kubenswrapper[28120]: I0220 15:20:22.578065 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-dns-svc\") pod \"4bf741d6-32da-404c-a508-ddf648ba8b62\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") "
Feb 20 15:20:22.578114 master-0 kubenswrapper[28120]: I0220 15:20:22.578100 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-config\") pod \"4bf741d6-32da-404c-a508-ddf648ba8b62\" (UID: \"4bf741d6-32da-404c-a508-ddf648ba8b62\") "
Feb 20 15:20:22.603443 master-0 kubenswrapper[28120]: I0220 15:20:22.603381 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf741d6-32da-404c-a508-ddf648ba8b62-kube-api-access-vbfwp" (OuterVolumeSpecName: "kube-api-access-vbfwp") pod "4bf741d6-32da-404c-a508-ddf648ba8b62" (UID: "4bf741d6-32da-404c-a508-ddf648ba8b62"). InnerVolumeSpecName "kube-api-access-vbfwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:20:22.665948 master-0 kubenswrapper[28120]: I0220 15:20:22.658069 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4bf741d6-32da-404c-a508-ddf648ba8b62" (UID: "4bf741d6-32da-404c-a508-ddf648ba8b62"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:20:22.665948 master-0 kubenswrapper[28120]: I0220 15:20:22.665582 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4bf741d6-32da-404c-a508-ddf648ba8b62" (UID: "4bf741d6-32da-404c-a508-ddf648ba8b62"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:20:22.670779 master-0 kubenswrapper[28120]: I0220 15:20:22.670707 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bf741d6-32da-404c-a508-ddf648ba8b62" (UID: "4bf741d6-32da-404c-a508-ddf648ba8b62"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:20:22.675647 master-0 kubenswrapper[28120]: I0220 15:20:22.675590 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-config" (OuterVolumeSpecName: "config") pod "4bf741d6-32da-404c-a508-ddf648ba8b62" (UID: "4bf741d6-32da-404c-a508-ddf648ba8b62"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:20:22.681494 master-0 kubenswrapper[28120]: I0220 15:20:22.681460 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbfwp\" (UniqueName: \"kubernetes.io/projected/4bf741d6-32da-404c-a508-ddf648ba8b62-kube-api-access-vbfwp\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:22.681494 master-0 kubenswrapper[28120]: I0220 15:20:22.681492 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:22.681613 master-0 kubenswrapper[28120]: I0220 15:20:22.681507 28120 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-config\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:22.681613 master-0 kubenswrapper[28120]: I0220 15:20:22.681519 28120 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:22.681613 master-0 kubenswrapper[28120]: I0220 15:20:22.681532 28120 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:22.689581 master-0 kubenswrapper[28120]: I0220 15:20:22.689520 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bf741d6-32da-404c-a508-ddf648ba8b62" (UID: "4bf741d6-32da-404c-a508-ddf648ba8b62"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 20 15:20:22.783388 master-0 kubenswrapper[28120]: I0220 15:20:22.783313 28120 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf741d6-32da-404c-a508-ddf648ba8b62-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:23.231942 master-0 kubenswrapper[28120]: I0220 15:20:23.231833 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v"
Feb 20 15:20:23.232819 master-0 kubenswrapper[28120]: I0220 15:20:23.231854 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" event={"ID":"4bf741d6-32da-404c-a508-ddf648ba8b62","Type":"ContainerDied","Data":"157765b7e92e39426d7a54f490519595618fdc1f459a37b748cebc9bdca97d97"}
Feb 20 15:20:23.232819 master-0 kubenswrapper[28120]: I0220 15:20:23.231992 28120 scope.go:117] "RemoveContainer" containerID="5afc18c64ceee0305ad458ab244e54e6cdbd0094d674cdb594580aa525b20d2f"
Feb 20 15:20:23.283075 master-0 kubenswrapper[28120]: I0220 15:20:23.282336 28120 scope.go:117] "RemoveContainer" containerID="3d9da4b096e7092a01b9f71846b734bdab551b42eb80ba3e41fa25ff6abea189"
Feb 20 15:20:23.315960 master-0 kubenswrapper[28120]: I0220 15:20:23.313973 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78d5d45447-5tn9v"]
Feb 20 15:20:23.328099 master-0 kubenswrapper[28120]: I0220 15:20:23.327847 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78d5d45447-5tn9v"]
Feb 20 15:20:23.739995 master-0 kubenswrapper[28120]: I0220 15:20:23.739913 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-ttdlx"
Feb 20 15:20:23.828054 master-0 kubenswrapper[28120]: I0220 15:20:23.827958 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-combined-ca-bundle\") pod \"c66fd46e-c999-4b3d-8430-3322dd85068c\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") "
Feb 20 15:20:23.828335 master-0 kubenswrapper[28120]: I0220 15:20:23.828074 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-scripts\") pod \"c66fd46e-c999-4b3d-8430-3322dd85068c\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") "
Feb 20 15:20:23.828335 master-0 kubenswrapper[28120]: I0220 15:20:23.828272 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-config-data\") pod \"c66fd46e-c999-4b3d-8430-3322dd85068c\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") "
Feb 20 15:20:23.828335 master-0 kubenswrapper[28120]: I0220 15:20:23.828316 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn6j9\" (UniqueName: \"kubernetes.io/projected/c66fd46e-c999-4b3d-8430-3322dd85068c-kube-api-access-bn6j9\") pod \"c66fd46e-c999-4b3d-8430-3322dd85068c\" (UID: \"c66fd46e-c999-4b3d-8430-3322dd85068c\") "
Feb 20 15:20:23.831739 master-0 kubenswrapper[28120]: I0220 15:20:23.831652 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c66fd46e-c999-4b3d-8430-3322dd85068c-kube-api-access-bn6j9" (OuterVolumeSpecName: "kube-api-access-bn6j9") pod "c66fd46e-c999-4b3d-8430-3322dd85068c" (UID: "c66fd46e-c999-4b3d-8430-3322dd85068c"). InnerVolumeSpecName "kube-api-access-bn6j9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:20:23.832678 master-0 kubenswrapper[28120]: I0220 15:20:23.832636 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-scripts" (OuterVolumeSpecName: "scripts") pod "c66fd46e-c999-4b3d-8430-3322dd85068c" (UID: "c66fd46e-c999-4b3d-8430-3322dd85068c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:20:23.871445 master-0 kubenswrapper[28120]: I0220 15:20:23.871355 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-config-data" (OuterVolumeSpecName: "config-data") pod "c66fd46e-c999-4b3d-8430-3322dd85068c" (UID: "c66fd46e-c999-4b3d-8430-3322dd85068c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:20:23.878435 master-0 kubenswrapper[28120]: I0220 15:20:23.878363 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c66fd46e-c999-4b3d-8430-3322dd85068c" (UID: "c66fd46e-c999-4b3d-8430-3322dd85068c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:20:23.931620 master-0 kubenswrapper[28120]: I0220 15:20:23.931509 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:23.931879 master-0 kubenswrapper[28120]: I0220 15:20:23.931863 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:23.932007 master-0 kubenswrapper[28120]: I0220 15:20:23.931993 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66fd46e-c999-4b3d-8430-3322dd85068c-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:23.932190 master-0 kubenswrapper[28120]: I0220 15:20:23.932173 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn6j9\" (UniqueName: \"kubernetes.io/projected/c66fd46e-c999-4b3d-8430-3322dd85068c-kube-api-access-bn6j9\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:24.079510 master-0 kubenswrapper[28120]: I0220 15:20:24.079409 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf741d6-32da-404c-a508-ddf648ba8b62" path="/var/lib/kubelet/pods/4bf741d6-32da-404c-a508-ddf648ba8b62/volumes"
Feb 20 15:20:24.250876 master-0 kubenswrapper[28120]: I0220 15:20:24.250821 28120 generic.go:334] "Generic (PLEG): container finished" podID="77c2fabe-8695-4a34-b116-f790f5b49b46" containerID="d683ac0b48f81b8c74b430dcd53c000e86043b4ca9f4b06c5c3cc473d3002c5c" exitCode=0
Feb 20 15:20:24.251693 master-0 kubenswrapper[28120]: I0220 15:20:24.250956 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lcrwp" event={"ID":"77c2fabe-8695-4a34-b116-f790f5b49b46","Type":"ContainerDied","Data":"d683ac0b48f81b8c74b430dcd53c000e86043b4ca9f4b06c5c3cc473d3002c5c"}
Feb 20 15:20:24.256372 master-0 kubenswrapper[28120]: I0220 15:20:24.256270 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-ttdlx" event={"ID":"c66fd46e-c999-4b3d-8430-3322dd85068c","Type":"ContainerDied","Data":"b3f74c11e2017cbce5ad5d3135cb4d6d7376f239e81ba26cac6eeab8d84a9e2b"}
Feb 20 15:20:24.256372 master-0 kubenswrapper[28120]: I0220 15:20:24.256364 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f74c11e2017cbce5ad5d3135cb4d6d7376f239e81ba26cac6eeab8d84a9e2b"
Feb 20 15:20:24.256621 master-0 kubenswrapper[28120]: I0220 15:20:24.256297 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-ttdlx"
Feb 20 15:20:25.858204 master-0 kubenswrapper[28120]: I0220 15:20:25.858153 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lcrwp"
Feb 20 15:20:25.988973 master-0 kubenswrapper[28120]: I0220 15:20:25.988896 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-scripts\") pod \"77c2fabe-8695-4a34-b116-f790f5b49b46\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") "
Feb 20 15:20:25.989465 master-0 kubenswrapper[28120]: I0220 15:20:25.989433 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b59x\" (UniqueName: \"kubernetes.io/projected/77c2fabe-8695-4a34-b116-f790f5b49b46-kube-api-access-7b59x\") pod \"77c2fabe-8695-4a34-b116-f790f5b49b46\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") "
Feb 20 15:20:25.989773 master-0 kubenswrapper[28120]: I0220 15:20:25.989745 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-config-data\") pod \"77c2fabe-8695-4a34-b116-f790f5b49b46\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") "
Feb 20 15:20:25.991004 master-0 kubenswrapper[28120]: I0220 15:20:25.990974 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-combined-ca-bundle\") pod \"77c2fabe-8695-4a34-b116-f790f5b49b46\" (UID: \"77c2fabe-8695-4a34-b116-f790f5b49b46\") "
Feb 20 15:20:25.998649 master-0 kubenswrapper[28120]: I0220 15:20:25.998536 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c2fabe-8695-4a34-b116-f790f5b49b46-kube-api-access-7b59x" (OuterVolumeSpecName: "kube-api-access-7b59x") pod "77c2fabe-8695-4a34-b116-f790f5b49b46" (UID: "77c2fabe-8695-4a34-b116-f790f5b49b46"). InnerVolumeSpecName "kube-api-access-7b59x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:20:26.000317 master-0 kubenswrapper[28120]: I0220 15:20:25.999866 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-scripts" (OuterVolumeSpecName: "scripts") pod "77c2fabe-8695-4a34-b116-f790f5b49b46" (UID: "77c2fabe-8695-4a34-b116-f790f5b49b46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:20:26.025284 master-0 kubenswrapper[28120]: I0220 15:20:26.025212 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-config-data" (OuterVolumeSpecName: "config-data") pod "77c2fabe-8695-4a34-b116-f790f5b49b46" (UID: "77c2fabe-8695-4a34-b116-f790f5b49b46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:20:26.032499 master-0 kubenswrapper[28120]: I0220 15:20:26.032457 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77c2fabe-8695-4a34-b116-f790f5b49b46" (UID: "77c2fabe-8695-4a34-b116-f790f5b49b46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:20:26.095361 master-0 kubenswrapper[28120]: I0220 15:20:26.095280 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b59x\" (UniqueName: \"kubernetes.io/projected/77c2fabe-8695-4a34-b116-f790f5b49b46-kube-api-access-7b59x\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:26.095361 master-0 kubenswrapper[28120]: I0220 15:20:26.095341 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:26.095361 master-0 kubenswrapper[28120]: I0220 15:20:26.095355 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:26.095361 master-0 kubenswrapper[28120]: I0220 15:20:26.095363 28120 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c2fabe-8695-4a34-b116-f790f5b49b46-scripts\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:26.298206 master-0 kubenswrapper[28120]: I0220 15:20:26.298057 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lcrwp" event={"ID":"77c2fabe-8695-4a34-b116-f790f5b49b46","Type":"ContainerDied","Data":"0af08d70832e469d6242279c9917fcd9cb3b11f4c93bdd962ec7064ccec1dadd"}
Feb 20 15:20:26.298206 master-0 kubenswrapper[28120]: I0220 15:20:26.298126 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af08d70832e469d6242279c9917fcd9cb3b11f4c93bdd962ec7064ccec1dadd"
Feb 20 15:20:26.298206 master-0 kubenswrapper[28120]: I0220 15:20:26.298158 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lcrwp"
Feb 20 15:20:26.615354 master-0 kubenswrapper[28120]: I0220 15:20:26.615213 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 20 15:20:26.615589 master-0 kubenswrapper[28120]: I0220 15:20:26.615495 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e6e5b14e-8b50-4436-86cd-959de104c5d7" containerName="nova-api-log" containerID="cri-o://62e521eda21509d2e889d866086d5be9a009e50de08c98619d8a4962112ac3a7" gracePeriod=30
Feb 20 15:20:26.616096 master-0 kubenswrapper[28120]: I0220 15:20:26.616060 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e6e5b14e-8b50-4436-86cd-959de104c5d7" containerName="nova-api-api" containerID="cri-o://a385192dcfb1674f43208acc27d68c912611600b95daf150ec56beabe76af437" gracePeriod=30
Feb 20 15:20:26.645887 master-0 kubenswrapper[28120]: I0220 15:20:26.645825 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 20 15:20:26.646093 master-0 kubenswrapper[28120]: I0220 15:20:26.646057 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="36eb9f8e-a58a-46e6-9fe5-f36c0058810d" containerName="nova-scheduler-scheduler" containerID="cri-o://302e23686b43ba7052a2f06e1a7c22122a338cc9de6961ef418c9491c19497d6" gracePeriod=30
Feb 20 15:20:26.682873 master-0 kubenswrapper[28120]: I0220 15:20:26.682815 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 20 15:20:26.683175 master-0 kubenswrapper[28120]: I0220 15:20:26.683138 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c45a9112-7488-415a-8a09-70d1af190834" containerName="nova-metadata-log" containerID="cri-o://bbff39fb3eabd8d1f8f828ee93c4a606f52ba17b78591c20872e1f60a7629cd4" gracePeriod=30
Feb 20 15:20:26.683306 master-0 kubenswrapper[28120]: I0220 15:20:26.683284 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c45a9112-7488-415a-8a09-70d1af190834" containerName="nova-metadata-metadata" containerID="cri-o://c3713c31c2f1e0747582562079acb92309e6cce2c860e78191563c33f87045ab" gracePeriod=30
Feb 20 15:20:27.193471 master-0 kubenswrapper[28120]: I0220 15:20:27.193332 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78d5d45447-5tn9v" podUID="4bf741d6-32da-404c-a508-ddf648ba8b62" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.1.10:5353: i/o timeout"
Feb 20 15:20:27.312620 master-0 kubenswrapper[28120]: I0220 15:20:27.312556 28120 generic.go:334] "Generic (PLEG): container finished" podID="e6e5b14e-8b50-4436-86cd-959de104c5d7" containerID="a385192dcfb1674f43208acc27d68c912611600b95daf150ec56beabe76af437" exitCode=0
Feb 20 15:20:27.312620 master-0 kubenswrapper[28120]: I0220 15:20:27.312608 28120 generic.go:334] "Generic (PLEG): container finished" podID="e6e5b14e-8b50-4436-86cd-959de104c5d7" containerID="62e521eda21509d2e889d866086d5be9a009e50de08c98619d8a4962112ac3a7" exitCode=143
Feb 20 15:20:27.312889 master-0 kubenswrapper[28120]: I0220 15:20:27.312653 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6e5b14e-8b50-4436-86cd-959de104c5d7","Type":"ContainerDied","Data":"a385192dcfb1674f43208acc27d68c912611600b95daf150ec56beabe76af437"}
Feb 20 15:20:27.312889 master-0 kubenswrapper[28120]: I0220 15:20:27.312685 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6e5b14e-8b50-4436-86cd-959de104c5d7","Type":"ContainerDied","Data":"62e521eda21509d2e889d866086d5be9a009e50de08c98619d8a4962112ac3a7"}
Feb 20 15:20:27.312889 master-0 kubenswrapper[28120]: I0220 15:20:27.312699 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6e5b14e-8b50-4436-86cd-959de104c5d7","Type":"ContainerDied","Data":"0be18ea021032ade5628e24302aabb8471090d4bd992f37147da519baabc225d"}
Feb 20 15:20:27.312889 master-0 kubenswrapper[28120]: I0220 15:20:27.312712 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be18ea021032ade5628e24302aabb8471090d4bd992f37147da519baabc225d"
Feb 20 15:20:27.315658 master-0 kubenswrapper[28120]: I0220 15:20:27.315618 28120 generic.go:334] "Generic (PLEG): container finished" podID="c45a9112-7488-415a-8a09-70d1af190834" containerID="bbff39fb3eabd8d1f8f828ee93c4a606f52ba17b78591c20872e1f60a7629cd4" exitCode=143
Feb 20 15:20:27.315737 master-0 kubenswrapper[28120]: I0220 15:20:27.315660 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c45a9112-7488-415a-8a09-70d1af190834","Type":"ContainerDied","Data":"bbff39fb3eabd8d1f8f828ee93c4a606f52ba17b78591c20872e1f60a7629cd4"}
Feb 20 15:20:27.329740 master-0 kubenswrapper[28120]: I0220 15:20:27.329691 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 20 15:20:27.429945 master-0 kubenswrapper[28120]: I0220 15:20:27.429095 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-combined-ca-bundle\") pod \"e6e5b14e-8b50-4436-86cd-959de104c5d7\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") "
Feb 20 15:20:27.429945 master-0 kubenswrapper[28120]: I0220 15:20:27.429204 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6e5b14e-8b50-4436-86cd-959de104c5d7-logs\") pod \"e6e5b14e-8b50-4436-86cd-959de104c5d7\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") "
Feb 20 15:20:27.429945 master-0 kubenswrapper[28120]: I0220 15:20:27.429298 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-config-data\") pod \"e6e5b14e-8b50-4436-86cd-959de104c5d7\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") "
Feb 20 15:20:27.429945 master-0 kubenswrapper[28120]: I0220 15:20:27.429411 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-internal-tls-certs\") pod \"e6e5b14e-8b50-4436-86cd-959de104c5d7\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") "
Feb 20 15:20:27.429945 master-0 kubenswrapper[28120]: I0220 15:20:27.429473 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdmjq\" (UniqueName: \"kubernetes.io/projected/e6e5b14e-8b50-4436-86cd-959de104c5d7-kube-api-access-sdmjq\") pod \"e6e5b14e-8b50-4436-86cd-959de104c5d7\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") "
Feb 20 15:20:27.429945 master-0 kubenswrapper[28120]: I0220 15:20:27.429534 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-public-tls-certs\") pod \"e6e5b14e-8b50-4436-86cd-959de104c5d7\" (UID: \"e6e5b14e-8b50-4436-86cd-959de104c5d7\") "
Feb 20 15:20:27.429945 master-0 kubenswrapper[28120]: I0220 15:20:27.429611 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e5b14e-8b50-4436-86cd-959de104c5d7-logs" (OuterVolumeSpecName: "logs") pod "e6e5b14e-8b50-4436-86cd-959de104c5d7" (UID: "e6e5b14e-8b50-4436-86cd-959de104c5d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 20 15:20:27.431021 master-0 kubenswrapper[28120]: I0220 15:20:27.430427 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6e5b14e-8b50-4436-86cd-959de104c5d7-logs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:27.444007 master-0 kubenswrapper[28120]: I0220 15:20:27.433526 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e5b14e-8b50-4436-86cd-959de104c5d7-kube-api-access-sdmjq" (OuterVolumeSpecName: "kube-api-access-sdmjq") pod "e6e5b14e-8b50-4436-86cd-959de104c5d7" (UID: "e6e5b14e-8b50-4436-86cd-959de104c5d7"). InnerVolumeSpecName "kube-api-access-sdmjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 20 15:20:27.471883 master-0 kubenswrapper[28120]: I0220 15:20:27.471808 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6e5b14e-8b50-4436-86cd-959de104c5d7" (UID: "e6e5b14e-8b50-4436-86cd-959de104c5d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:20:27.476838 master-0 kubenswrapper[28120]: I0220 15:20:27.476555 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-config-data" (OuterVolumeSpecName: "config-data") pod "e6e5b14e-8b50-4436-86cd-959de104c5d7" (UID: "e6e5b14e-8b50-4436-86cd-959de104c5d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:20:27.497852 master-0 kubenswrapper[28120]: I0220 15:20:27.497015 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e6e5b14e-8b50-4436-86cd-959de104c5d7" (UID: "e6e5b14e-8b50-4436-86cd-959de104c5d7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:20:27.510215 master-0 kubenswrapper[28120]: I0220 15:20:27.510156 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e6e5b14e-8b50-4436-86cd-959de104c5d7" (UID: "e6e5b14e-8b50-4436-86cd-959de104c5d7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 20 15:20:27.535570 master-0 kubenswrapper[28120]: I0220 15:20:27.535499 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:27.535570 master-0 kubenswrapper[28120]: I0220 15:20:27.535558 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-config-data\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:27.535570 master-0 kubenswrapper[28120]: I0220 15:20:27.535567 28120 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:27.535712 master-0 kubenswrapper[28120]: I0220 15:20:27.535577 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdmjq\" (UniqueName: \"kubernetes.io/projected/e6e5b14e-8b50-4436-86cd-959de104c5d7-kube-api-access-sdmjq\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:27.535712 master-0 kubenswrapper[28120]: I0220 15:20:27.535592 28120 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6e5b14e-8b50-4436-86cd-959de104c5d7-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 20 15:20:28.329080 master-0 kubenswrapper[28120]: I0220 15:20:28.329019 28120 generic.go:334] "Generic (PLEG): container finished" podID="36eb9f8e-a58a-46e6-9fe5-f36c0058810d" containerID="302e23686b43ba7052a2f06e1a7c22122a338cc9de6961ef418c9491c19497d6" exitCode=0
Feb 20 15:20:28.329639 master-0 kubenswrapper[28120]: I0220 15:20:28.329067 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36eb9f8e-a58a-46e6-9fe5-f36c0058810d","Type":"ContainerDied","Data":"302e23686b43ba7052a2f06e1a7c22122a338cc9de6961ef418c9491c19497d6"}
Feb 20 15:20:28.329639 master-0 kubenswrapper[28120]: I0220 15:20:28.329118 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 20 15:20:28.329639 master-0 kubenswrapper[28120]: I0220 15:20:28.329134 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36eb9f8e-a58a-46e6-9fe5-f36c0058810d","Type":"ContainerDied","Data":"afabd19320b83de37f323eb78e446b875efaaf79f8939527a4cdf4095d97ec61"}
Feb 20 15:20:28.329639 master-0 kubenswrapper[28120]: I0220 15:20:28.329146 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afabd19320b83de37f323eb78e446b875efaaf79f8939527a4cdf4095d97ec61"
Feb 20 15:20:28.403865 master-0 kubenswrapper[28120]: I0220 15:20:28.403813 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 20 15:20:28.422046 master-0 kubenswrapper[28120]: I0220 15:20:28.421989 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 20 15:20:28.431940 master-0 kubenswrapper[28120]: I0220 15:20:28.431864 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.469869 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: E0220 15:20:28.470741 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c2fabe-8695-4a34-b116-f790f5b49b46" containerName="nova-manage"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.470761 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c2fabe-8695-4a34-b116-f790f5b49b46" containerName="nova-manage"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: E0220 15:20:28.470774 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf741d6-32da-404c-a508-ddf648ba8b62" containerName="dnsmasq-dns"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.470780 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf741d6-32da-404c-a508-ddf648ba8b62" containerName="dnsmasq-dns"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: E0220 15:20:28.470794 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e5b14e-8b50-4436-86cd-959de104c5d7" containerName="nova-api-api"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.470802 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e5b14e-8b50-4436-86cd-959de104c5d7" containerName="nova-api-api"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: E0220 15:20:28.470813 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c66fd46e-c999-4b3d-8430-3322dd85068c" containerName="nova-manage"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.470822 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="c66fd46e-c999-4b3d-8430-3322dd85068c" containerName="nova-manage"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: E0220 15:20:28.470842 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf741d6-32da-404c-a508-ddf648ba8b62" containerName="init"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.470849 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf741d6-32da-404c-a508-ddf648ba8b62" containerName="init"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: E0220 15:20:28.470862 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36eb9f8e-a58a-46e6-9fe5-f36c0058810d" containerName="nova-scheduler-scheduler"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.470869 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="36eb9f8e-a58a-46e6-9fe5-f36c0058810d" containerName="nova-scheduler-scheduler"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: E0220 15:20:28.470945 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e5b14e-8b50-4436-86cd-959de104c5d7" containerName="nova-api-log"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.470954 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e5b14e-8b50-4436-86cd-959de104c5d7" containerName="nova-api-log"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.471272 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e5b14e-8b50-4436-86cd-959de104c5d7" containerName="nova-api-api"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.471303 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="77c2fabe-8695-4a34-b116-f790f5b49b46" containerName="nova-manage"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.471326 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e5b14e-8b50-4436-86cd-959de104c5d7" containerName="nova-api-log"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.471337 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf741d6-32da-404c-a508-ddf648ba8b62" containerName="dnsmasq-dns"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.471355 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="36eb9f8e-a58a-46e6-9fe5-f36c0058810d" containerName="nova-scheduler-scheduler"
Feb 20 15:20:28.471424 master-0 kubenswrapper[28120]: I0220 15:20:28.471370 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="c66fd46e-c999-4b3d-8430-3322dd85068c" containerName="nova-manage"
Feb 20 15:20:28.472856 master-0 kubenswrapper[28120]: I0220 15:20:28.472818 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 20 15:20:28.475952 master-0 kubenswrapper[28120]: I0220 15:20:28.475647 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 20 15:20:28.476047 master-0 kubenswrapper[28120]: I0220 15:20:28.475959 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 20 15:20:28.476095 master-0 kubenswrapper[28120]: I0220 15:20:28.476077 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 20 15:20:28.508013 master-0 kubenswrapper[28120]: I0220 15:20:28.507952 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 20 15:20:28.587039 master-0 kubenswrapper[28120]: I0220 15:20:28.586916 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzh49\" (UniqueName: \"kubernetes.io/projected/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-kube-api-access-rzh49\") pod 
\"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\" (UID: \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\") " Feb 20 15:20:28.587039 master-0 kubenswrapper[28120]: I0220 15:20:28.587022 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-config-data\") pod \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\" (UID: \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\") " Feb 20 15:20:28.587535 master-0 kubenswrapper[28120]: I0220 15:20:28.587156 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-combined-ca-bundle\") pod \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\" (UID: \"36eb9f8e-a58a-46e6-9fe5-f36c0058810d\") " Feb 20 15:20:28.587535 master-0 kubenswrapper[28120]: I0220 15:20:28.587328 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cc9938-e54b-4183-b50d-1893158f4be5-config-data\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.587535 master-0 kubenswrapper[28120]: I0220 15:20:28.587376 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23cc9938-e54b-4183-b50d-1893158f4be5-public-tls-certs\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.587535 master-0 kubenswrapper[28120]: I0220 15:20:28.587402 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7cdf\" (UniqueName: \"kubernetes.io/projected/23cc9938-e54b-4183-b50d-1893158f4be5-kube-api-access-t7cdf\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 
15:20:28.587535 master-0 kubenswrapper[28120]: I0220 15:20:28.587452 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cc9938-e54b-4183-b50d-1893158f4be5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.587535 master-0 kubenswrapper[28120]: I0220 15:20:28.587471 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23cc9938-e54b-4183-b50d-1893158f4be5-logs\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.587535 master-0 kubenswrapper[28120]: I0220 15:20:28.587495 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23cc9938-e54b-4183-b50d-1893158f4be5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.591839 master-0 kubenswrapper[28120]: I0220 15:20:28.591768 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-kube-api-access-rzh49" (OuterVolumeSpecName: "kube-api-access-rzh49") pod "36eb9f8e-a58a-46e6-9fe5-f36c0058810d" (UID: "36eb9f8e-a58a-46e6-9fe5-f36c0058810d"). InnerVolumeSpecName "kube-api-access-rzh49". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:20:28.616412 master-0 kubenswrapper[28120]: I0220 15:20:28.616335 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-config-data" (OuterVolumeSpecName: "config-data") pod "36eb9f8e-a58a-46e6-9fe5-f36c0058810d" (UID: "36eb9f8e-a58a-46e6-9fe5-f36c0058810d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:20:28.618869 master-0 kubenswrapper[28120]: I0220 15:20:28.618798 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36eb9f8e-a58a-46e6-9fe5-f36c0058810d" (UID: "36eb9f8e-a58a-46e6-9fe5-f36c0058810d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:20:28.689731 master-0 kubenswrapper[28120]: I0220 15:20:28.689259 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23cc9938-e54b-4183-b50d-1893158f4be5-public-tls-certs\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.689731 master-0 kubenswrapper[28120]: I0220 15:20:28.689337 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7cdf\" (UniqueName: \"kubernetes.io/projected/23cc9938-e54b-4183-b50d-1893158f4be5-kube-api-access-t7cdf\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.689731 master-0 kubenswrapper[28120]: I0220 15:20:28.689426 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cc9938-e54b-4183-b50d-1893158f4be5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.689731 master-0 kubenswrapper[28120]: I0220 15:20:28.689455 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23cc9938-e54b-4183-b50d-1893158f4be5-logs\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " 
pod="openstack/nova-api-0" Feb 20 15:20:28.689731 master-0 kubenswrapper[28120]: I0220 15:20:28.689492 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23cc9938-e54b-4183-b50d-1893158f4be5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.689731 master-0 kubenswrapper[28120]: I0220 15:20:28.689664 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cc9938-e54b-4183-b50d-1893158f4be5-config-data\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.690667 master-0 kubenswrapper[28120]: I0220 15:20:28.689806 28120 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:28.690667 master-0 kubenswrapper[28120]: I0220 15:20:28.689838 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzh49\" (UniqueName: \"kubernetes.io/projected/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-kube-api-access-rzh49\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:28.690667 master-0 kubenswrapper[28120]: I0220 15:20:28.689858 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36eb9f8e-a58a-46e6-9fe5-f36c0058810d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:28.690667 master-0 kubenswrapper[28120]: I0220 15:20:28.690182 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23cc9938-e54b-4183-b50d-1893158f4be5-logs\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.692570 master-0 
kubenswrapper[28120]: I0220 15:20:28.692525 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23cc9938-e54b-4183-b50d-1893158f4be5-public-tls-certs\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.695144 master-0 kubenswrapper[28120]: I0220 15:20:28.694469 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23cc9938-e54b-4183-b50d-1893158f4be5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.696031 master-0 kubenswrapper[28120]: I0220 15:20:28.695993 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23cc9938-e54b-4183-b50d-1893158f4be5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.697345 master-0 kubenswrapper[28120]: I0220 15:20:28.697291 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23cc9938-e54b-4183-b50d-1893158f4be5-config-data\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.706499 master-0 kubenswrapper[28120]: I0220 15:20:28.706452 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7cdf\" (UniqueName: \"kubernetes.io/projected/23cc9938-e54b-4183-b50d-1893158f4be5-kube-api-access-t7cdf\") pod \"nova-api-0\" (UID: \"23cc9938-e54b-4183-b50d-1893158f4be5\") " pod="openstack/nova-api-0" Feb 20 15:20:28.822643 master-0 kubenswrapper[28120]: I0220 15:20:28.822567 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 20 15:20:29.382195 master-0 kubenswrapper[28120]: I0220 15:20:29.382103 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 15:20:29.389165 master-0 kubenswrapper[28120]: W0220 15:20:29.389083 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23cc9938_e54b_4183_b50d_1893158f4be5.slice/crio-831394ea33b14ea16723a375bb49194afbe77ba2b6989748510d9e3cbf0c253a WatchSource:0}: Error finding container 831394ea33b14ea16723a375bb49194afbe77ba2b6989748510d9e3cbf0c253a: Status 404 returned error can't find the container with id 831394ea33b14ea16723a375bb49194afbe77ba2b6989748510d9e3cbf0c253a Feb 20 15:20:29.391541 master-0 kubenswrapper[28120]: I0220 15:20:29.391482 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 20 15:20:29.466272 master-0 kubenswrapper[28120]: I0220 15:20:29.466203 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:20:29.483536 master-0 kubenswrapper[28120]: I0220 15:20:29.483479 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:20:29.497773 master-0 kubenswrapper[28120]: I0220 15:20:29.497645 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:20:29.514172 master-0 kubenswrapper[28120]: I0220 15:20:29.514083 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:20:29.514363 master-0 kubenswrapper[28120]: I0220 15:20:29.514284 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 15:20:29.516776 master-0 kubenswrapper[28120]: I0220 15:20:29.516739 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 20 15:20:29.715467 master-0 kubenswrapper[28120]: I0220 15:20:29.715306 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/991254c1-f5e7-4023-af77-e52796e26b2f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"991254c1-f5e7-4023-af77-e52796e26b2f\") " pod="openstack/nova-scheduler-0" Feb 20 15:20:29.715772 master-0 kubenswrapper[28120]: I0220 15:20:29.715493 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7hdz\" (UniqueName: \"kubernetes.io/projected/991254c1-f5e7-4023-af77-e52796e26b2f-kube-api-access-b7hdz\") pod \"nova-scheduler-0\" (UID: \"991254c1-f5e7-4023-af77-e52796e26b2f\") " pod="openstack/nova-scheduler-0" Feb 20 15:20:29.715883 master-0 kubenswrapper[28120]: I0220 15:20:29.715847 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/991254c1-f5e7-4023-af77-e52796e26b2f-config-data\") pod \"nova-scheduler-0\" (UID: \"991254c1-f5e7-4023-af77-e52796e26b2f\") " pod="openstack/nova-scheduler-0" Feb 20 15:20:29.818563 master-0 kubenswrapper[28120]: I0220 15:20:29.818446 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/991254c1-f5e7-4023-af77-e52796e26b2f-config-data\") pod \"nova-scheduler-0\" (UID: \"991254c1-f5e7-4023-af77-e52796e26b2f\") " pod="openstack/nova-scheduler-0" Feb 20 15:20:29.818903 master-0 kubenswrapper[28120]: I0220 15:20:29.818771 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/991254c1-f5e7-4023-af77-e52796e26b2f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"991254c1-f5e7-4023-af77-e52796e26b2f\") " pod="openstack/nova-scheduler-0" Feb 20 15:20:29.818903 master-0 kubenswrapper[28120]: I0220 15:20:29.818810 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7hdz\" (UniqueName: \"kubernetes.io/projected/991254c1-f5e7-4023-af77-e52796e26b2f-kube-api-access-b7hdz\") pod \"nova-scheduler-0\" (UID: \"991254c1-f5e7-4023-af77-e52796e26b2f\") " pod="openstack/nova-scheduler-0" Feb 20 15:20:29.823780 master-0 kubenswrapper[28120]: I0220 15:20:29.823719 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/991254c1-f5e7-4023-af77-e52796e26b2f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"991254c1-f5e7-4023-af77-e52796e26b2f\") " pod="openstack/nova-scheduler-0" Feb 20 15:20:29.823960 master-0 kubenswrapper[28120]: I0220 15:20:29.823853 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c45a9112-7488-415a-8a09-70d1af190834" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.15:8775/\": read tcp 10.128.0.2:46338->10.128.1.15:8775: read: connection reset by peer" Feb 20 15:20:29.824117 master-0 kubenswrapper[28120]: I0220 15:20:29.824041 28120 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c45a9112-7488-415a-8a09-70d1af190834" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.15:8775/\": read tcp 10.128.0.2:46326->10.128.1.15:8775: read: connection reset by peer" Feb 20 15:20:29.825041 master-0 kubenswrapper[28120]: I0220 15:20:29.824984 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/991254c1-f5e7-4023-af77-e52796e26b2f-config-data\") pod \"nova-scheduler-0\" (UID: \"991254c1-f5e7-4023-af77-e52796e26b2f\") " pod="openstack/nova-scheduler-0" Feb 20 15:20:29.836543 master-0 kubenswrapper[28120]: I0220 15:20:29.836464 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7hdz\" (UniqueName: \"kubernetes.io/projected/991254c1-f5e7-4023-af77-e52796e26b2f-kube-api-access-b7hdz\") pod \"nova-scheduler-0\" (UID: \"991254c1-f5e7-4023-af77-e52796e26b2f\") " pod="openstack/nova-scheduler-0" Feb 20 15:20:29.879603 master-0 kubenswrapper[28120]: I0220 15:20:29.879503 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 20 15:20:30.103999 master-0 kubenswrapper[28120]: I0220 15:20:30.103878 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36eb9f8e-a58a-46e6-9fe5-f36c0058810d" path="/var/lib/kubelet/pods/36eb9f8e-a58a-46e6-9fe5-f36c0058810d/volumes" Feb 20 15:20:30.104882 master-0 kubenswrapper[28120]: I0220 15:20:30.104850 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e5b14e-8b50-4436-86cd-959de104c5d7" path="/var/lib/kubelet/pods/e6e5b14e-8b50-4436-86cd-959de104c5d7/volumes" Feb 20 15:20:30.424218 master-0 kubenswrapper[28120]: I0220 15:20:30.424152 28120 generic.go:334] "Generic (PLEG): container finished" podID="c45a9112-7488-415a-8a09-70d1af190834" containerID="c3713c31c2f1e0747582562079acb92309e6cce2c860e78191563c33f87045ab" exitCode=0 Feb 20 15:20:30.424676 master-0 kubenswrapper[28120]: I0220 15:20:30.424240 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c45a9112-7488-415a-8a09-70d1af190834","Type":"ContainerDied","Data":"c3713c31c2f1e0747582562079acb92309e6cce2c860e78191563c33f87045ab"} Feb 20 15:20:30.424676 master-0 kubenswrapper[28120]: I0220 15:20:30.424284 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"c45a9112-7488-415a-8a09-70d1af190834","Type":"ContainerDied","Data":"6f96282fd2745e896bf68946e8484765d054adf422d3ecd91277db1223918d17"} Feb 20 15:20:30.424676 master-0 kubenswrapper[28120]: I0220 15:20:30.424302 28120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f96282fd2745e896bf68946e8484765d054adf422d3ecd91277db1223918d17" Feb 20 15:20:30.440149 master-0 kubenswrapper[28120]: I0220 15:20:30.440106 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23cc9938-e54b-4183-b50d-1893158f4be5","Type":"ContainerStarted","Data":"4a250aeb468cce1f236e651d6e90237f4fae14808b92360a064b9ff935991325"} Feb 20 15:20:30.440247 master-0 kubenswrapper[28120]: I0220 15:20:30.440155 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23cc9938-e54b-4183-b50d-1893158f4be5","Type":"ContainerStarted","Data":"54201b09dd22706d1f4f084635ffbd53d91e72942a0514b21de647802bf5fb6b"} Feb 20 15:20:30.440247 master-0 kubenswrapper[28120]: I0220 15:20:30.440170 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"23cc9938-e54b-4183-b50d-1893158f4be5","Type":"ContainerStarted","Data":"831394ea33b14ea16723a375bb49194afbe77ba2b6989748510d9e3cbf0c253a"} Feb 20 15:20:30.473054 master-0 kubenswrapper[28120]: I0220 15:20:30.470793 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.470776268 podStartE2EDuration="2.470776268s" podCreationTimestamp="2026-02-20 15:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:20:30.46927342 +0000 UTC m=+1168.730067023" watchObservedRunningTime="2026-02-20 15:20:30.470776268 +0000 UTC m=+1168.731569831" Feb 20 15:20:30.501020 master-0 kubenswrapper[28120]: I0220 15:20:30.500965 28120 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 15:20:30.612664 master-0 kubenswrapper[28120]: I0220 15:20:30.612604 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 20 15:20:30.651679 master-0 kubenswrapper[28120]: I0220 15:20:30.651605 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-nova-metadata-tls-certs\") pod \"c45a9112-7488-415a-8a09-70d1af190834\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " Feb 20 15:20:30.651873 master-0 kubenswrapper[28120]: I0220 15:20:30.651756 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c45a9112-7488-415a-8a09-70d1af190834-logs\") pod \"c45a9112-7488-415a-8a09-70d1af190834\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " Feb 20 15:20:30.651873 master-0 kubenswrapper[28120]: I0220 15:20:30.651780 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd7zq\" (UniqueName: \"kubernetes.io/projected/c45a9112-7488-415a-8a09-70d1af190834-kube-api-access-bd7zq\") pod \"c45a9112-7488-415a-8a09-70d1af190834\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " Feb 20 15:20:30.651873 master-0 kubenswrapper[28120]: I0220 15:20:30.651840 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-config-data\") pod \"c45a9112-7488-415a-8a09-70d1af190834\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " Feb 20 15:20:30.652038 master-0 kubenswrapper[28120]: I0220 15:20:30.651974 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-combined-ca-bundle\") pod \"c45a9112-7488-415a-8a09-70d1af190834\" (UID: \"c45a9112-7488-415a-8a09-70d1af190834\") " Feb 20 15:20:30.652301 master-0 kubenswrapper[28120]: I0220 15:20:30.652240 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c45a9112-7488-415a-8a09-70d1af190834-logs" (OuterVolumeSpecName: "logs") pod "c45a9112-7488-415a-8a09-70d1af190834" (UID: "c45a9112-7488-415a-8a09-70d1af190834"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 20 15:20:30.654193 master-0 kubenswrapper[28120]: I0220 15:20:30.654166 28120 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c45a9112-7488-415a-8a09-70d1af190834-logs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:30.656014 master-0 kubenswrapper[28120]: I0220 15:20:30.655853 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45a9112-7488-415a-8a09-70d1af190834-kube-api-access-bd7zq" (OuterVolumeSpecName: "kube-api-access-bd7zq") pod "c45a9112-7488-415a-8a09-70d1af190834" (UID: "c45a9112-7488-415a-8a09-70d1af190834"). InnerVolumeSpecName "kube-api-access-bd7zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:20:30.686996 master-0 kubenswrapper[28120]: I0220 15:20:30.686936 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c45a9112-7488-415a-8a09-70d1af190834" (UID: "c45a9112-7488-415a-8a09-70d1af190834"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:20:30.701695 master-0 kubenswrapper[28120]: I0220 15:20:30.701642 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-config-data" (OuterVolumeSpecName: "config-data") pod "c45a9112-7488-415a-8a09-70d1af190834" (UID: "c45a9112-7488-415a-8a09-70d1af190834"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:20:30.730617 master-0 kubenswrapper[28120]: I0220 15:20:30.730507 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c45a9112-7488-415a-8a09-70d1af190834" (UID: "c45a9112-7488-415a-8a09-70d1af190834"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:20:30.756370 master-0 kubenswrapper[28120]: I0220 15:20:30.756323 28120 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:30.756370 master-0 kubenswrapper[28120]: I0220 15:20:30.756362 28120 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd7zq\" (UniqueName: \"kubernetes.io/projected/c45a9112-7488-415a-8a09-70d1af190834-kube-api-access-bd7zq\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:30.756515 master-0 kubenswrapper[28120]: I0220 15:20:30.756376 28120 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-config-data\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:30.756515 master-0 kubenswrapper[28120]: I0220 15:20:30.756386 28120 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45a9112-7488-415a-8a09-70d1af190834-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 20 15:20:31.453633 master-0 kubenswrapper[28120]: I0220 15:20:31.453548 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"991254c1-f5e7-4023-af77-e52796e26b2f","Type":"ContainerStarted","Data":"5f907ef36edb36d2a16aa85093229ddbaca471b1e20ea04ff0338ef8a489c9cd"} Feb 20 15:20:31.453633 master-0 kubenswrapper[28120]: I0220 15:20:31.453597 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"991254c1-f5e7-4023-af77-e52796e26b2f","Type":"ContainerStarted","Data":"b29dddefac214c2950754095c839ebc6f813adca90ed8f7e762d3b3a26be41e6"} Feb 20 15:20:31.454620 master-0 kubenswrapper[28120]: I0220 15:20:31.453651 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 15:20:31.497684 master-0 kubenswrapper[28120]: I0220 15:20:31.497568 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.497550546 podStartE2EDuration="2.497550546s" podCreationTimestamp="2026-02-20 15:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:20:31.485031425 +0000 UTC m=+1169.745825008" watchObservedRunningTime="2026-02-20 15:20:31.497550546 +0000 UTC m=+1169.758344119" Feb 20 15:20:31.540575 master-0 kubenswrapper[28120]: I0220 15:20:31.540458 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:20:31.585658 master-0 kubenswrapper[28120]: I0220 15:20:31.585539 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:20:31.613005 master-0 kubenswrapper[28120]: I0220 15:20:31.612946 28120 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:20:31.613615 master-0 kubenswrapper[28120]: E0220 15:20:31.613579 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45a9112-7488-415a-8a09-70d1af190834" containerName="nova-metadata-log" Feb 20 15:20:31.613615 master-0 kubenswrapper[28120]: I0220 15:20:31.613604 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45a9112-7488-415a-8a09-70d1af190834" containerName="nova-metadata-log" Feb 20 15:20:31.613729 master-0 kubenswrapper[28120]: E0220 15:20:31.613665 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45a9112-7488-415a-8a09-70d1af190834" containerName="nova-metadata-metadata" Feb 20 15:20:31.613729 master-0 kubenswrapper[28120]: I0220 15:20:31.613674 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45a9112-7488-415a-8a09-70d1af190834" containerName="nova-metadata-metadata" Feb 20 15:20:31.614038 master-0 kubenswrapper[28120]: I0220 15:20:31.614003 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45a9112-7488-415a-8a09-70d1af190834" containerName="nova-metadata-log" Feb 20 15:20:31.614120 master-0 kubenswrapper[28120]: I0220 15:20:31.614075 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45a9112-7488-415a-8a09-70d1af190834" containerName="nova-metadata-metadata" Feb 20 15:20:31.615874 master-0 kubenswrapper[28120]: I0220 15:20:31.615835 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 15:20:31.618063 master-0 kubenswrapper[28120]: I0220 15:20:31.618020 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 20 15:20:31.618390 master-0 kubenswrapper[28120]: I0220 15:20:31.618356 28120 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 20 15:20:31.626306 master-0 kubenswrapper[28120]: I0220 15:20:31.626215 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:20:31.707909 master-0 kubenswrapper[28120]: I0220 15:20:31.707787 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-logs\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.707909 master-0 kubenswrapper[28120]: I0220 15:20:31.707834 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-config-data\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.708181 master-0 kubenswrapper[28120]: I0220 15:20:31.707982 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.708181 master-0 kubenswrapper[28120]: I0220 15:20:31.708008 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.708579 master-0 kubenswrapper[28120]: I0220 15:20:31.708514 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzszj\" (UniqueName: \"kubernetes.io/projected/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-kube-api-access-xzszj\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.809870 master-0 kubenswrapper[28120]: I0220 15:20:31.809800 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.809870 master-0 kubenswrapper[28120]: I0220 15:20:31.809871 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.810127 master-0 kubenswrapper[28120]: I0220 15:20:31.810037 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzszj\" (UniqueName: \"kubernetes.io/projected/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-kube-api-access-xzszj\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.810127 master-0 kubenswrapper[28120]: I0220 15:20:31.810082 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-logs\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.810127 master-0 kubenswrapper[28120]: I0220 15:20:31.810108 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-config-data\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.811623 master-0 kubenswrapper[28120]: I0220 15:20:31.811573 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-logs\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.813684 master-0 kubenswrapper[28120]: I0220 15:20:31.813627 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-config-data\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.817902 master-0 kubenswrapper[28120]: I0220 15:20:31.817854 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.827392 master-0 kubenswrapper[28120]: I0220 15:20:31.827356 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " 
pod="openstack/nova-metadata-0" Feb 20 15:20:31.829971 master-0 kubenswrapper[28120]: I0220 15:20:31.829911 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzszj\" (UniqueName: \"kubernetes.io/projected/693cfbf3-50eb-4df5-96ee-30fb0ee34e31-kube-api-access-xzszj\") pod \"nova-metadata-0\" (UID: \"693cfbf3-50eb-4df5-96ee-30fb0ee34e31\") " pod="openstack/nova-metadata-0" Feb 20 15:20:31.948997 master-0 kubenswrapper[28120]: I0220 15:20:31.948911 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 20 15:20:32.075609 master-0 kubenswrapper[28120]: I0220 15:20:32.075524 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c45a9112-7488-415a-8a09-70d1af190834" path="/var/lib/kubelet/pods/c45a9112-7488-415a-8a09-70d1af190834/volumes" Feb 20 15:20:32.499563 master-0 kubenswrapper[28120]: W0220 15:20:32.499387 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod693cfbf3_50eb_4df5_96ee_30fb0ee34e31.slice/crio-ce9b16e29e3f85335290e87a8c651ec232884948dd03e381b35004c2d5ec06ce WatchSource:0}: Error finding container ce9b16e29e3f85335290e87a8c651ec232884948dd03e381b35004c2d5ec06ce: Status 404 returned error can't find the container with id ce9b16e29e3f85335290e87a8c651ec232884948dd03e381b35004c2d5ec06ce Feb 20 15:20:32.500474 master-0 kubenswrapper[28120]: I0220 15:20:32.499645 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 20 15:20:33.500455 master-0 kubenswrapper[28120]: I0220 15:20:33.500367 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"693cfbf3-50eb-4df5-96ee-30fb0ee34e31","Type":"ContainerStarted","Data":"0a784dfe05b1870aebc3d19e939cad806c1b668b8cf1f4948ffad27067602fa4"} Feb 20 15:20:33.500455 master-0 kubenswrapper[28120]: I0220 15:20:33.500432 28120 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"693cfbf3-50eb-4df5-96ee-30fb0ee34e31","Type":"ContainerStarted","Data":"d7fb1acc531ea7df6bad6496e6717f596bc388ea1c093f71a3ed1137be0ecf55"} Feb 20 15:20:33.500455 master-0 kubenswrapper[28120]: I0220 15:20:33.500445 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"693cfbf3-50eb-4df5-96ee-30fb0ee34e31","Type":"ContainerStarted","Data":"ce9b16e29e3f85335290e87a8c651ec232884948dd03e381b35004c2d5ec06ce"} Feb 20 15:20:33.536715 master-0 kubenswrapper[28120]: I0220 15:20:33.536584 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5365569900000002 podStartE2EDuration="2.53655699s" podCreationTimestamp="2026-02-20 15:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:20:33.528389057 +0000 UTC m=+1171.789182620" watchObservedRunningTime="2026-02-20 15:20:33.53655699 +0000 UTC m=+1171.797350593" Feb 20 15:20:34.879947 master-0 kubenswrapper[28120]: I0220 15:20:34.879721 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 20 15:20:36.949581 master-0 kubenswrapper[28120]: I0220 15:20:36.949443 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 20 15:20:36.949581 master-0 kubenswrapper[28120]: I0220 15:20:36.949557 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 20 15:20:38.823354 master-0 kubenswrapper[28120]: I0220 15:20:38.823193 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 20 15:20:38.823354 master-0 kubenswrapper[28120]: I0220 15:20:38.823299 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Feb 20 15:20:39.839957 master-0 kubenswrapper[28120]: I0220 15:20:39.839193 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="23cc9938-e54b-4183-b50d-1893158f4be5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.22:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 15:20:39.839957 master-0 kubenswrapper[28120]: I0220 15:20:39.839232 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="23cc9938-e54b-4183-b50d-1893158f4be5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.22:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 15:20:39.880521 master-0 kubenswrapper[28120]: I0220 15:20:39.880452 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 20 15:20:39.943278 master-0 kubenswrapper[28120]: I0220 15:20:39.943207 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 20 15:20:40.663691 master-0 kubenswrapper[28120]: I0220 15:20:40.663597 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 20 15:20:41.949544 master-0 kubenswrapper[28120]: I0220 15:20:41.949464 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 20 15:20:41.949544 master-0 kubenswrapper[28120]: I0220 15:20:41.949537 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 20 15:20:42.972635 master-0 kubenswrapper[28120]: I0220 15:20:42.972176 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="693cfbf3-50eb-4df5-96ee-30fb0ee34e31" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.128.1.24:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 15:20:42.972635 master-0 kubenswrapper[28120]: I0220 15:20:42.972574 28120 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="693cfbf3-50eb-4df5-96ee-30fb0ee34e31" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.24:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 20 15:20:48.833907 master-0 kubenswrapper[28120]: I0220 15:20:48.833787 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 20 15:20:48.835105 master-0 kubenswrapper[28120]: I0220 15:20:48.834610 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 20 15:20:48.835376 master-0 kubenswrapper[28120]: I0220 15:20:48.835296 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 20 15:20:48.850144 master-0 kubenswrapper[28120]: I0220 15:20:48.850081 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 20 15:20:49.789576 master-0 kubenswrapper[28120]: I0220 15:20:49.788018 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 20 15:20:49.807337 master-0 kubenswrapper[28120]: I0220 15:20:49.807265 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 20 15:20:51.958481 master-0 kubenswrapper[28120]: I0220 15:20:51.958413 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 20 15:20:51.959658 master-0 kubenswrapper[28120]: I0220 15:20:51.959579 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 20 15:20:51.968297 master-0 kubenswrapper[28120]: I0220 15:20:51.968237 
28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 20 15:20:51.971995 master-0 kubenswrapper[28120]: I0220 15:20:51.971883 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 20 15:21:18.476894 master-0 kubenswrapper[28120]: I0220 15:21:18.476778 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-xfrxj"] Feb 20 15:21:18.478203 master-0 kubenswrapper[28120]: I0220 15:21:18.477037 28120 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" podUID="d1cb8849-9478-4af0-a159-0c003d9ceaed" containerName="sushy-emulator" containerID="cri-o://bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636" gracePeriod=30 Feb 20 15:21:19.238713 master-0 kubenswrapper[28120]: I0220 15:21:19.238645 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:21:19.285271 master-0 kubenswrapper[28120]: I0220 15:21:19.285203 28120 generic.go:334] "Generic (PLEG): container finished" podID="d1cb8849-9478-4af0-a159-0c003d9ceaed" containerID="bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636" exitCode=0 Feb 20 15:21:19.285271 master-0 kubenswrapper[28120]: I0220 15:21:19.285257 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" event={"ID":"d1cb8849-9478-4af0-a159-0c003d9ceaed","Type":"ContainerDied","Data":"bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636"} Feb 20 15:21:19.285271 master-0 kubenswrapper[28120]: I0220 15:21:19.285284 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" 
event={"ID":"d1cb8849-9478-4af0-a159-0c003d9ceaed","Type":"ContainerDied","Data":"cef1151ce2478d3c096b8abd6f53a028decaf578aea9229e7e6f14f368f77825"} Feb 20 15:21:19.285664 master-0 kubenswrapper[28120]: I0220 15:21:19.285300 28120 scope.go:117] "RemoveContainer" containerID="bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636" Feb 20 15:21:19.285664 master-0 kubenswrapper[28120]: I0220 15:21:19.285422 28120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-xfrxj" Feb 20 15:21:19.336675 master-0 kubenswrapper[28120]: I0220 15:21:19.336636 28120 scope.go:117] "RemoveContainer" containerID="bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636" Feb 20 15:21:19.337250 master-0 kubenswrapper[28120]: E0220 15:21:19.337223 28120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636\": container with ID starting with bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636 not found: ID does not exist" containerID="bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636" Feb 20 15:21:19.337334 master-0 kubenswrapper[28120]: I0220 15:21:19.337268 28120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636"} err="failed to get container status \"bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636\": rpc error: code = NotFound desc = could not find container \"bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636\": container with ID starting with bf302f9251ae8798bcffe1721ada8edf55c2f2759d249fb91e81c73fb9703636 not found: ID does not exist" Feb 20 15:21:19.346158 master-0 kubenswrapper[28120]: I0220 15:21:19.346121 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"os-client-config\" (UniqueName: \"kubernetes.io/secret/d1cb8849-9478-4af0-a159-0c003d9ceaed-os-client-config\") pod \"d1cb8849-9478-4af0-a159-0c003d9ceaed\" (UID: \"d1cb8849-9478-4af0-a159-0c003d9ceaed\") " Feb 20 15:21:19.346524 master-0 kubenswrapper[28120]: I0220 15:21:19.346493 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d1cb8849-9478-4af0-a159-0c003d9ceaed-sushy-emulator-config\") pod \"d1cb8849-9478-4af0-a159-0c003d9ceaed\" (UID: \"d1cb8849-9478-4af0-a159-0c003d9ceaed\") " Feb 20 15:21:19.346698 master-0 kubenswrapper[28120]: I0220 15:21:19.346670 28120 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcltd\" (UniqueName: \"kubernetes.io/projected/d1cb8849-9478-4af0-a159-0c003d9ceaed-kube-api-access-fcltd\") pod \"d1cb8849-9478-4af0-a159-0c003d9ceaed\" (UID: \"d1cb8849-9478-4af0-a159-0c003d9ceaed\") " Feb 20 15:21:19.347544 master-0 kubenswrapper[28120]: I0220 15:21:19.347471 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1cb8849-9478-4af0-a159-0c003d9ceaed-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "d1cb8849-9478-4af0-a159-0c003d9ceaed" (UID: "d1cb8849-9478-4af0-a159-0c003d9ceaed"). InnerVolumeSpecName "sushy-emulator-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 20 15:21:19.352018 master-0 kubenswrapper[28120]: I0220 15:21:19.349726 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1cb8849-9478-4af0-a159-0c003d9ceaed-kube-api-access-fcltd" (OuterVolumeSpecName: "kube-api-access-fcltd") pod "d1cb8849-9478-4af0-a159-0c003d9ceaed" (UID: "d1cb8849-9478-4af0-a159-0c003d9ceaed"). InnerVolumeSpecName "kube-api-access-fcltd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 20 15:21:19.363208 master-0 kubenswrapper[28120]: I0220 15:21:19.363116 28120 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1cb8849-9478-4af0-a159-0c003d9ceaed-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "d1cb8849-9478-4af0-a159-0c003d9ceaed" (UID: "d1cb8849-9478-4af0-a159-0c003d9ceaed"). InnerVolumeSpecName "os-client-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 20 15:21:19.409347 master-0 kubenswrapper[28120]: I0220 15:21:19.402611 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-gjxww"] Feb 20 15:21:19.409347 master-0 kubenswrapper[28120]: E0220 15:21:19.403441 28120 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1cb8849-9478-4af0-a159-0c003d9ceaed" containerName="sushy-emulator" Feb 20 15:21:19.409347 master-0 kubenswrapper[28120]: I0220 15:21:19.403489 28120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1cb8849-9478-4af0-a159-0c003d9ceaed" containerName="sushy-emulator" Feb 20 15:21:19.409347 master-0 kubenswrapper[28120]: I0220 15:21:19.403863 28120 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1cb8849-9478-4af0-a159-0c003d9ceaed" containerName="sushy-emulator" Feb 20 15:21:19.409347 master-0 kubenswrapper[28120]: I0220 15:21:19.405559 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:19.424604 master-0 kubenswrapper[28120]: I0220 15:21:19.424553 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-gjxww"] Feb 20 15:21:19.449604 master-0 kubenswrapper[28120]: I0220 15:21:19.449483 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rm7h\" (UniqueName: \"kubernetes.io/projected/7974f5e3-1e88-4461-8787-797a6d6173fc-kube-api-access-4rm7h\") pod \"sushy-emulator-84965d5d88-gjxww\" (UID: \"7974f5e3-1e88-4461-8787-797a6d6173fc\") " pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:19.449825 master-0 kubenswrapper[28120]: I0220 15:21:19.449674 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/7974f5e3-1e88-4461-8787-797a6d6173fc-os-client-config\") pod \"sushy-emulator-84965d5d88-gjxww\" (UID: \"7974f5e3-1e88-4461-8787-797a6d6173fc\") " pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:19.449825 master-0 kubenswrapper[28120]: I0220 15:21:19.449727 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/7974f5e3-1e88-4461-8787-797a6d6173fc-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-gjxww\" (UID: \"7974f5e3-1e88-4461-8787-797a6d6173fc\") " pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:19.449825 master-0 kubenswrapper[28120]: I0220 15:21:19.449810 28120 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d1cb8849-9478-4af0-a159-0c003d9ceaed-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:21:19.449946 master-0 kubenswrapper[28120]: I0220 15:21:19.449826 28120 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcltd\" (UniqueName: \"kubernetes.io/projected/d1cb8849-9478-4af0-a159-0c003d9ceaed-kube-api-access-fcltd\") on node \"master-0\" DevicePath \"\"" Feb 20 15:21:19.449946 master-0 kubenswrapper[28120]: I0220 15:21:19.449840 28120 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d1cb8849-9478-4af0-a159-0c003d9ceaed-os-client-config\") on node \"master-0\" DevicePath \"\"" Feb 20 15:21:19.552682 master-0 kubenswrapper[28120]: I0220 15:21:19.552585 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/7974f5e3-1e88-4461-8787-797a6d6173fc-os-client-config\") pod \"sushy-emulator-84965d5d88-gjxww\" (UID: \"7974f5e3-1e88-4461-8787-797a6d6173fc\") " pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:19.553268 master-0 kubenswrapper[28120]: I0220 15:21:19.552891 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/7974f5e3-1e88-4461-8787-797a6d6173fc-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-gjxww\" (UID: \"7974f5e3-1e88-4461-8787-797a6d6173fc\") " pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:19.553384 master-0 kubenswrapper[28120]: I0220 15:21:19.553109 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rm7h\" (UniqueName: \"kubernetes.io/projected/7974f5e3-1e88-4461-8787-797a6d6173fc-kube-api-access-4rm7h\") pod \"sushy-emulator-84965d5d88-gjxww\" (UID: \"7974f5e3-1e88-4461-8787-797a6d6173fc\") " pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:19.554126 master-0 kubenswrapper[28120]: I0220 15:21:19.554066 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: 
\"kubernetes.io/configmap/7974f5e3-1e88-4461-8787-797a6d6173fc-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-gjxww\" (UID: \"7974f5e3-1e88-4461-8787-797a6d6173fc\") " pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:19.558814 master-0 kubenswrapper[28120]: I0220 15:21:19.558783 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/7974f5e3-1e88-4461-8787-797a6d6173fc-os-client-config\") pod \"sushy-emulator-84965d5d88-gjxww\" (UID: \"7974f5e3-1e88-4461-8787-797a6d6173fc\") " pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:19.579685 master-0 kubenswrapper[28120]: I0220 15:21:19.579599 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rm7h\" (UniqueName: \"kubernetes.io/projected/7974f5e3-1e88-4461-8787-797a6d6173fc-kube-api-access-4rm7h\") pod \"sushy-emulator-84965d5d88-gjxww\" (UID: \"7974f5e3-1e88-4461-8787-797a6d6173fc\") " pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:19.649825 master-0 kubenswrapper[28120]: I0220 15:21:19.649734 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-xfrxj"] Feb 20 15:21:19.669129 master-0 kubenswrapper[28120]: I0220 15:21:19.668998 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-xfrxj"] Feb 20 15:21:19.781524 master-0 kubenswrapper[28120]: I0220 15:21:19.781456 28120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:20.080413 master-0 kubenswrapper[28120]: I0220 15:21:20.080358 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1cb8849-9478-4af0-a159-0c003d9ceaed" path="/var/lib/kubelet/pods/d1cb8849-9478-4af0-a159-0c003d9ceaed/volumes" Feb 20 15:21:20.458096 master-0 kubenswrapper[28120]: I0220 15:21:20.458036 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-gjxww"] Feb 20 15:21:20.466802 master-0 kubenswrapper[28120]: W0220 15:21:20.466755 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7974f5e3_1e88_4461_8787_797a6d6173fc.slice/crio-ae2dbe2513874c687f63579fb25391f90dd01df8c350b7c7a5d8a37ffae4d085 WatchSource:0}: Error finding container ae2dbe2513874c687f63579fb25391f90dd01df8c350b7c7a5d8a37ffae4d085: Status 404 returned error can't find the container with id ae2dbe2513874c687f63579fb25391f90dd01df8c350b7c7a5d8a37ffae4d085 Feb 20 15:21:21.328153 master-0 kubenswrapper[28120]: I0220 15:21:21.328067 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" event={"ID":"7974f5e3-1e88-4461-8787-797a6d6173fc","Type":"ContainerStarted","Data":"942a82c7ac6328f132401c2a1deb0ce6ea33f0bd5ecc9e2d29af6a0dcde67827"} Feb 20 15:21:21.328153 master-0 kubenswrapper[28120]: I0220 15:21:21.328152 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" event={"ID":"7974f5e3-1e88-4461-8787-797a6d6173fc","Type":"ContainerStarted","Data":"ae2dbe2513874c687f63579fb25391f90dd01df8c350b7c7a5d8a37ffae4d085"} Feb 20 15:21:21.347914 master-0 kubenswrapper[28120]: I0220 15:21:21.346624 28120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" podStartSLOduration=2.346606269 
podStartE2EDuration="2.346606269s" podCreationTimestamp="2026-02-20 15:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-20 15:21:21.345306407 +0000 UTC m=+1219.606099980" watchObservedRunningTime="2026-02-20 15:21:21.346606269 +0000 UTC m=+1219.607399842" Feb 20 15:21:29.782502 master-0 kubenswrapper[28120]: I0220 15:21:29.782397 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:29.782502 master-0 kubenswrapper[28120]: I0220 15:21:29.782490 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:29.798402 master-0 kubenswrapper[28120]: I0220 15:21:29.798325 28120 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:21:30.522725 master-0 kubenswrapper[28120]: I0220 15:21:30.522611 28120 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-84965d5d88-gjxww" Feb 20 15:23:09.419763 master-0 kubenswrapper[28120]: I0220 15:23:09.419667 28120 scope.go:117] "RemoveContainer" containerID="9d3ab156505d0a147cafbd9f58f42ff51264ee66c0eb34b8112f4b16e3bfea31" Feb 20 15:23:09.488060 master-0 kubenswrapper[28120]: I0220 15:23:09.487972 28120 scope.go:117] "RemoveContainer" containerID="abe9bfbd99e3ebb7cc3894ba981f0a3cb3b18e6271103e9c832b5e1615b4860d" Feb 20 15:24:09.596858 master-0 kubenswrapper[28120]: I0220 15:24:09.596757 28120 scope.go:117] "RemoveContainer" containerID="8d0d95b16efe3caf22501cb057776fd5f6b09608c7ca41fdec256586d6e4e404" Feb 20 15:26:09.747819 master-0 kubenswrapper[28120]: I0220 15:26:09.747718 28120 scope.go:117] "RemoveContainer" containerID="302e23686b43ba7052a2f06e1a7c22122a338cc9de6961ef418c9491c19497d6" Feb 20 15:26:09.789235 master-0 kubenswrapper[28120]: 
I0220 15:26:09.789169 28120 scope.go:117] "RemoveContainer" containerID="b7f49e587c42c065b05fb5c760110dfa3293fb8fbd02d01f8149a490f7389523"
Feb 20 15:26:09.847671 master-0 kubenswrapper[28120]: I0220 15:26:09.847604 28120 scope.go:117] "RemoveContainer" containerID="88902a2caa6f2334db31bfc9411b4e3db96e9598f6333e615c14498ab1f0bb90"
Feb 20 15:26:09.884667 master-0 kubenswrapper[28120]: I0220 15:26:09.884602 28120 scope.go:117] "RemoveContainer" containerID="c3713c31c2f1e0747582562079acb92309e6cce2c860e78191563c33f87045ab"
Feb 20 15:26:09.923212 master-0 kubenswrapper[28120]: I0220 15:26:09.923151 28120 scope.go:117] "RemoveContainer" containerID="bbff39fb3eabd8d1f8f828ee93c4a606f52ba17b78591c20872e1f60a7629cd4"
Feb 20 15:26:27.057885 master-0 kubenswrapper[28120]: I0220 15:26:27.057834 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5334-account-create-update-s44rq"]
Feb 20 15:26:27.073868 master-0 kubenswrapper[28120]: I0220 15:26:27.073813 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5334-account-create-update-s44rq"]
Feb 20 15:26:27.907054 master-0 kubenswrapper[28120]: I0220 15:26:27.906992 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-txqnb/must-gather-2rmbw"]
Feb 20 15:26:27.910050 master-0 kubenswrapper[28120]: I0220 15:26:27.910003 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txqnb/must-gather-2rmbw"
Feb 20 15:26:27.913044 master-0 kubenswrapper[28120]: I0220 15:26:27.912969 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-txqnb"/"kube-root-ca.crt"
Feb 20 15:26:27.913310 master-0 kubenswrapper[28120]: I0220 15:26:27.913276 28120 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-txqnb"/"openshift-service-ca.crt"
Feb 20 15:26:27.923809 master-0 kubenswrapper[28120]: I0220 15:26:27.923747 28120 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-txqnb/must-gather-75gfz"]
Feb 20 15:26:27.927092 master-0 kubenswrapper[28120]: I0220 15:26:27.926838 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txqnb/must-gather-75gfz"
Feb 20 15:26:27.965432 master-0 kubenswrapper[28120]: I0220 15:26:27.964119 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-txqnb/must-gather-2rmbw"]
Feb 20 15:26:27.968234 master-0 kubenswrapper[28120]: I0220 15:26:27.968192 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wws8w\" (UniqueName: \"kubernetes.io/projected/c138a21a-1c7c-44df-9341-f6f7eae61a50-kube-api-access-wws8w\") pod \"must-gather-2rmbw\" (UID: \"c138a21a-1c7c-44df-9341-f6f7eae61a50\") " pod="openshift-must-gather-txqnb/must-gather-2rmbw"
Feb 20 15:26:27.968455 master-0 kubenswrapper[28120]: I0220 15:26:27.968441 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8gm2\" (UniqueName: \"kubernetes.io/projected/9da8f96d-0c01-4efb-b9f0-6c2f96806706-kube-api-access-j8gm2\") pod \"must-gather-75gfz\" (UID: \"9da8f96d-0c01-4efb-b9f0-6c2f96806706\") " pod="openshift-must-gather-txqnb/must-gather-75gfz"
Feb 20 15:26:27.968535 master-0 kubenswrapper[28120]: I0220 15:26:27.968523 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c138a21a-1c7c-44df-9341-f6f7eae61a50-must-gather-output\") pod \"must-gather-2rmbw\" (UID: \"c138a21a-1c7c-44df-9341-f6f7eae61a50\") " pod="openshift-must-gather-txqnb/must-gather-2rmbw"
Feb 20 15:26:27.968618 master-0 kubenswrapper[28120]: I0220 15:26:27.968606 28120 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9da8f96d-0c01-4efb-b9f0-6c2f96806706-must-gather-output\") pod \"must-gather-75gfz\" (UID: \"9da8f96d-0c01-4efb-b9f0-6c2f96806706\") " pod="openshift-must-gather-txqnb/must-gather-75gfz"
Feb 20 15:26:28.006185 master-0 kubenswrapper[28120]: I0220 15:26:28.004001 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-txqnb/must-gather-75gfz"]
Feb 20 15:26:28.072550 master-0 kubenswrapper[28120]: I0220 15:26:28.072443 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8gm2\" (UniqueName: \"kubernetes.io/projected/9da8f96d-0c01-4efb-b9f0-6c2f96806706-kube-api-access-j8gm2\") pod \"must-gather-75gfz\" (UID: \"9da8f96d-0c01-4efb-b9f0-6c2f96806706\") " pod="openshift-must-gather-txqnb/must-gather-75gfz"
Feb 20 15:26:28.072550 master-0 kubenswrapper[28120]: I0220 15:26:28.072503 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c138a21a-1c7c-44df-9341-f6f7eae61a50-must-gather-output\") pod \"must-gather-2rmbw\" (UID: \"c138a21a-1c7c-44df-9341-f6f7eae61a50\") " pod="openshift-must-gather-txqnb/must-gather-2rmbw"
Feb 20 15:26:28.072550 master-0 kubenswrapper[28120]: I0220 15:26:28.072530 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9da8f96d-0c01-4efb-b9f0-6c2f96806706-must-gather-output\") pod \"must-gather-75gfz\" (UID: \"9da8f96d-0c01-4efb-b9f0-6c2f96806706\") " pod="openshift-must-gather-txqnb/must-gather-75gfz"
Feb 20 15:26:28.075582 master-0 kubenswrapper[28120]: I0220 15:26:28.075519 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9da8f96d-0c01-4efb-b9f0-6c2f96806706-must-gather-output\") pod \"must-gather-75gfz\" (UID: \"9da8f96d-0c01-4efb-b9f0-6c2f96806706\") " pod="openshift-must-gather-txqnb/must-gather-75gfz"
Feb 20 15:26:28.076612 master-0 kubenswrapper[28120]: I0220 15:26:28.076577 28120 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wws8w\" (UniqueName: \"kubernetes.io/projected/c138a21a-1c7c-44df-9341-f6f7eae61a50-kube-api-access-wws8w\") pod \"must-gather-2rmbw\" (UID: \"c138a21a-1c7c-44df-9341-f6f7eae61a50\") " pod="openshift-must-gather-txqnb/must-gather-2rmbw"
Feb 20 15:26:28.077448 master-0 kubenswrapper[28120]: I0220 15:26:28.077392 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ccdb626-016f-4380-9396-9597993bc3df" path="/var/lib/kubelet/pods/2ccdb626-016f-4380-9396-9597993bc3df/volumes"
Feb 20 15:26:28.078543 master-0 kubenswrapper[28120]: I0220 15:26:28.078201 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-kcsfb"]
Feb 20 15:26:28.078543 master-0 kubenswrapper[28120]: I0220 15:26:28.078229 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3973-account-create-update-hf6wb"]
Feb 20 15:26:28.079524 master-0 kubenswrapper[28120]: I0220 15:26:28.079495 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c138a21a-1c7c-44df-9341-f6f7eae61a50-must-gather-output\") pod \"must-gather-2rmbw\" (UID: \"c138a21a-1c7c-44df-9341-f6f7eae61a50\") " pod="openshift-must-gather-txqnb/must-gather-2rmbw"
Feb 20 15:26:28.087820 master-0 kubenswrapper[28120]: I0220 15:26:28.087692 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2588-account-create-update-2jr7b"]
Feb 20 15:26:28.107269 master-0 kubenswrapper[28120]: I0220 15:26:28.106980 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rccl4"]
Feb 20 15:26:28.107805 master-0 kubenswrapper[28120]: I0220 15:26:28.107763 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wws8w\" (UniqueName: \"kubernetes.io/projected/c138a21a-1c7c-44df-9341-f6f7eae61a50-kube-api-access-wws8w\") pod \"must-gather-2rmbw\" (UID: \"c138a21a-1c7c-44df-9341-f6f7eae61a50\") " pod="openshift-must-gather-txqnb/must-gather-2rmbw"
Feb 20 15:26:28.110277 master-0 kubenswrapper[28120]: I0220 15:26:28.110248 28120 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8gm2\" (UniqueName: \"kubernetes.io/projected/9da8f96d-0c01-4efb-b9f0-6c2f96806706-kube-api-access-j8gm2\") pod \"must-gather-75gfz\" (UID: \"9da8f96d-0c01-4efb-b9f0-6c2f96806706\") " pod="openshift-must-gather-txqnb/must-gather-75gfz"
Feb 20 15:26:28.133839 master-0 kubenswrapper[28120]: I0220 15:26:28.133785 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-cdlc8"]
Feb 20 15:26:28.144554 master-0 kubenswrapper[28120]: I0220 15:26:28.144511 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-kcsfb"]
Feb 20 15:26:28.158195 master-0 kubenswrapper[28120]: I0220 15:26:28.158062 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3973-account-create-update-hf6wb"]
Feb 20 15:26:28.169877 master-0 kubenswrapper[28120]: I0220 15:26:28.169803 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-cdlc8"]
Feb 20 15:26:28.181917 master-0 kubenswrapper[28120]: I0220 15:26:28.181780 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rccl4"]
Feb 20 15:26:28.193495 master-0 kubenswrapper[28120]: I0220 15:26:28.193422 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2588-account-create-update-2jr7b"]
Feb 20 15:26:28.232947 master-0 kubenswrapper[28120]: I0220 15:26:28.232863 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txqnb/must-gather-2rmbw"
Feb 20 15:26:28.281927 master-0 kubenswrapper[28120]: I0220 15:26:28.281849 28120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txqnb/must-gather-75gfz"
Feb 20 15:26:28.728402 master-0 kubenswrapper[28120]: I0220 15:26:28.728322 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-txqnb/must-gather-2rmbw"]
Feb 20 15:26:28.730239 master-0 kubenswrapper[28120]: I0220 15:26:28.729619 28120 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 20 15:26:28.845225 master-0 kubenswrapper[28120]: W0220 15:26:28.845159 28120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9da8f96d_0c01_4efb_b9f0_6c2f96806706.slice/crio-5056d28d64f7a91afa7684fecb950618b835410e9f5a68bbe6b2635a6d374116 WatchSource:0}: Error finding container 5056d28d64f7a91afa7684fecb950618b835410e9f5a68bbe6b2635a6d374116: Status 404 returned error can't find the container with id 5056d28d64f7a91afa7684fecb950618b835410e9f5a68bbe6b2635a6d374116
Feb 20 15:26:28.852672 master-0 kubenswrapper[28120]: I0220 15:26:28.852599 28120 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-txqnb/must-gather-75gfz"]
Feb 20 15:26:29.578417 master-0 kubenswrapper[28120]: I0220 15:26:29.578346 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txqnb/must-gather-75gfz" event={"ID":"9da8f96d-0c01-4efb-b9f0-6c2f96806706","Type":"ContainerStarted","Data":"5056d28d64f7a91afa7684fecb950618b835410e9f5a68bbe6b2635a6d374116"}
Feb 20 15:26:29.580342 master-0 kubenswrapper[28120]: I0220 15:26:29.580302 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txqnb/must-gather-2rmbw" event={"ID":"c138a21a-1c7c-44df-9341-f6f7eae61a50","Type":"ContainerStarted","Data":"9ba7da2184eec5f78ad62f72a7aeff67320e3e33187e0aa5dbbc375755009f2c"}
Feb 20 15:26:30.074194 master-0 kubenswrapper[28120]: I0220 15:26:30.073962 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15cb2c0b-9fbe-4047-b752-b96968eb5408" path="/var/lib/kubelet/pods/15cb2c0b-9fbe-4047-b752-b96968eb5408/volumes"
Feb 20 15:26:30.074747 master-0 kubenswrapper[28120]: I0220 15:26:30.074723 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25bbf6e5-6c71-4b45-9aa6-cbfe06455f50" path="/var/lib/kubelet/pods/25bbf6e5-6c71-4b45-9aa6-cbfe06455f50/volumes"
Feb 20 15:26:30.075524 master-0 kubenswrapper[28120]: I0220 15:26:30.075417 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b332210-786b-421d-8b99-5dcfbeb5196c" path="/var/lib/kubelet/pods/2b332210-786b-421d-8b99-5dcfbeb5196c/volumes"
Feb 20 15:26:30.076801 master-0 kubenswrapper[28120]: I0220 15:26:30.076752 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6721337b-1c31-48d0-88e1-f3251edbebc3" path="/var/lib/kubelet/pods/6721337b-1c31-48d0-88e1-f3251edbebc3/volumes"
Feb 20 15:26:30.077502 master-0 kubenswrapper[28120]: I0220 15:26:30.077479 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5684c90-d24c-4dff-b5aa-ae72282b2a6f" path="/var/lib/kubelet/pods/e5684c90-d24c-4dff-b5aa-ae72282b2a6f/volumes"
Feb 20 15:26:31.604694 master-0 kubenswrapper[28120]: I0220 15:26:31.604626 28120 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txqnb/must-gather-75gfz" event={"ID":"9da8f96d-0c01-4efb-b9f0-6c2f96806706","Type":"ContainerStarted","Data":"8e5dd912587b3cddeb85d28e59f1e0e4ae0c09947209b1d1c40aa77c09381e94"}
Feb 20 15:26:34.101944 master-0 kubenswrapper[28120]: I0220 15:26:34.101251 28120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-57476485-nl7tx_26473c28-db42-47e6-9164-8c441ccc48ca/cluster-version-operator/0.log"
Feb 20 15:26:34.124855 master-0 kubenswrapper[28120]: I0220 15:26:34.122450 28120 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8z72f"]
Feb 20 15:26:34.124855 master-0 kubenswrapper[28120]: I0220 15:26:34.122490 28120 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8z72f"]
Feb 20 15:26:36.093943 master-0 kubenswrapper[28120]: I0220 15:26:36.093452 28120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b" path="/var/lib/kubelet/pods/01ea4d45-37ae-4b09-a8a3-4bd372b5ac7b/volumes"